Labels on AI-edited images are upsetting photographers

Some questions are easy to answer on the internet. Is it a cake? Cut it with a knife and find out. Will these ingredients blend? Put them in the blender and see. But "Was this made with artificial intelligence?" is a harder question.

You might not think so. Obviously, an image created with a tool such as Midjourney or OpenAI's DALL-E should be described as "made with AI." In those cases, the only human effort involved is imagining the picture and typing a prompt into the AI service. But consider a trickier example I have been thinking about.

A few days ago, veteran photographer Matt Souss got up at 3:30 a.m. and staked out his spot ahead of the crowds of tourists to capture the sunrise over Canyonlands National Park in Utah. In post-production he blended several frames to get the exposure just right, then used Adobe Photoshop to remove a small but distracting dust spot. The end result was clear enough. But shortly after he posted it to Instagram and Threads, the image was automatically labeled "Made with AI" because he had used generative fill. Souss found this frustrating. "I think it gives the average user, the ordinary person, the impression that the whole thing was conjured up in the blink of an eye" by having AI create the image, he told me.

Meta Platforms developed its AI-labeling policy after its Oversight Board noted the need to better inform users about edited content, even when the intent is not necessarily to deceive. Meta's head of content policy, Monika Bickert, wrote that the company consulted "more than 120 stakeholders in 34 countries in every major region of the world." The system remains primitive: it relies on self-disclosure, or on metadata that photo-editing programs attach when AI features are used (a rough sketch of what such a metadata check might look like appears below).

A heavy-handed label

Now that the system has been rolled out, some professional photographers are feeling its weight. Matt Growcott of PetaPixel, an independent photography news site, wrote: "The fact that Instagram, arguably the most important platform for photography, undermines photographers' authenticity with the AI label, whether it means to or not, is insulting and demeaning."

Perhaps you are thinking that Souss did, after all, use AI to adjust his image, so the label is fair, and that if he did not want the label, he should not have used AI at all. But then why is there no label for the kinds of edits made with Photoshop? Techniques for enhancing, editing and retouching photographs have existed for nearly two centuries, and stylistic adjustments are widely accepted, except in most photojournalism. AI has opened a new conversation about where real creativity ends and the artificial begins.
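To make the mechanism concrete: editing tools such as Photoshop can embed provenance metadata in an exported image, and the IPTC "Digital Source Type" vocabulary includes values for AI-generated and AI-composited content. The following is a minimal, illustrative sketch of how a platform might scan for those markers; the file name is hypothetical, the string matching is deliberately naive, and real systems parse C2PA/XMP metadata with dedicated libraries rather than anything this crude.

```python
# Minimal sketch: look for IPTC "Digital Source Type" markers that editing
# tools such as Photoshop may embed in an image's XMP packet when generative
# AI features are used. Illustrative only; real platforms use proper
# C2PA/XMP parsers rather than string matching.

import re

# IPTC Digital Source Type values commonly associated with generative AI.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia": "fully AI-generated",
    "compositeWithTrainedAlgorithmicMedia": "partially AI-edited (e.g. generative fill)",
}

def detect_ai_metadata(path: str) -> str | None:
    """Return a rough description of any AI-related source-type marker found."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP is stored as an XML packet inside the file; search it as text.
    xmp_match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    if not xmp_match:
        return None
    xmp = xmp_match.group(0).decode("utf-8", errors="ignore")
    for marker, description in AI_SOURCE_TYPES.items():
        if marker in xmp:
            return description
    return None

if __name__ == "__main__":
    # "sunrise_canyonlands.jpg" is a hypothetical file name for illustration.
    result = detect_ai_metadata("sunrise_canyonlands.jpg")
    print(result or "no AI metadata found")
```

The point of the sketch is simply that the label is triggered by what the editing tool writes into the file, not by anything detected in the pixels themselves, which is why a routine dust-spot removal and a fully synthetic image can end up carrying the same tag.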
Souss feels that the "Made with AI" label is heavy-handed and misleading. I agree with him. After all, no AI can trek through a national park on Souss's behalf.

The lesser of two evils

Even so, Meta's labeling policy is doing reasonably well, at least for the moment. A blog post this year by the company's head of global affairs, Nick Clegg, indicated that the current approach would remain in place "through the next year, during which a number of important elections will be held around the world." In other words, over-labeling photographers' images and angering them is the lesser of two evils; the other is allowing fake images to sway elections around the world. The mistakes made in the run-up to the 2016 US presidential election are damaging the company to this day.

Meta is expected to revisit its policy and make changes in 2025, but users should not hold out hope for a solution that satisfies everyone, or even succeeds outright. It is all too easy these days for anyone intent on deceiving to bypass these disclosure methods, but there is still value in finding a better way for artists to be transparent with their fans. One solution Souss favors is a sliding scale, under which some AI adjustments (such as removing a minor flaw) are recorded in the metadata for anyone who cares to check, but without the stigma of an AI label being slapped on the post (a hypothetical sketch of such a scheme appears below).

A game of cat and mouse

There is hope for a better approach that does not rely on the honesty of people on the internet (ha!). Meta and other companies are exploring more sophisticated ways to detect when AI has been used by analyzing the image itself. Such a system could tell whether something was created entirely by AI or only partially altered, like Souss's sunrise shot. Meta is working hard on this approach, though it is a cat-and-mouse game: as AI gets better at creating images, detecting when it has been used will only get harder. Answering the ethical questions will not get any easier.
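For what a sliding scale might look like in practice, here is a hypothetical sketch. The tier names, the edit categories and the mapping are all invented for illustration; Meta's actual policy does not work this way.

```python
# Hypothetical sketch of a "sliding scale" for disclosure: instead of one
# blanket "Made with AI" label, each kind of edit maps to a disclosure tier.
# The tiers and the mapping below are invented for illustration only.

from enum import Enum

class DisclosureTier(Enum):
    NONE = "no label"
    METADATA_ONLY = "noted in metadata for those who look"
    INFO_LABEL = "AI info label"
    MADE_WITH_AI = "Made with AI"

# Example mapping from edit type to tier (illustrative values only).
EDIT_TIERS = {
    "exposure_blend": DisclosureTier.NONE,
    "dust_spot_generative_fill": DisclosureTier.METADATA_ONLY,
    "ai_background_replacement": DisclosureTier.INFO_LABEL,
    "fully_ai_generated": DisclosureTier.MADE_WITH_AI,
}

def label_for_edits(edits: list[str]) -> DisclosureTier:
    """Return the strongest disclosure tier triggered by any edit."""
    tiers = [EDIT_TIERS.get(e, DisclosureTier.NONE) for e in edits]
    # Tiers are declared from weakest to strongest, so rank by declaration order.
    order = list(DisclosureTier)
    return max(tiers, key=order.index)

if __name__ == "__main__":
    print(label_for_edits(["exposure_blend", "dust_spot_generative_fill"]))
    # -> DisclosureTier.METADATA_ONLY
```

Under a scheme like this, Souss's exposure blending and dust-spot fix would be noted in the metadata for anyone who cares to check, while a fully generated image would still carry the prominent label.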