Alphabet CEO Sundar Pichai sent an email to employees yesterday addressing the problematic responses of the Gemini engine, describing them as "completely unacceptable". In the memo, seen by Bloomberg News, Pichai wrote that teams are working around the clock to fix the problems. The CEO of both Alphabet and Google stressed the company's obligation to provide neutral and accurate information, with an emphasis on structural changes to prevent similar incidents.

Inaccurate images

The Gemini application had been promoted as Google's flagship artificial intelligence product, but the company was criticized after it produced images depicting historically inaccurate scenes when asked to generate pictures of people. The company, headquartered in Mountain View, California, stopped accepting requests to generate images of people while it works to address the concerns raised.

Pichai's structural changes

Pichai wrote: "We will be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations." He stressed that the focus should be on "helpful products that are worthy of our users' trust."

The resurgence of interest in artificial intelligence and its uses has sparked a wave of scrutiny, with many critics pointing to the possibility of AI being used to generate misleading content, whether deliberately or accidentally.

Also read: "Google" introduces free artificial intelligence tools to improve internet security.

Below is the full memo Pichai sent to "Google" employees, first reported by the news website Semafor:

I want to address the recent problems with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses offended our users and showed bias; to be clear, that is completely unacceptable and we got it wrong. Our teams have been working around the clock to address these issues. We are already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us, and we will keep at it for however long it takes. We will review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it universally accessible and useful is sacrosanct.

A comprehensive view

We have always sought to give users helpful, accurate, and unbiased information in our products. That is why people trust them. This has to be our approach for all our products, including our emerging artificial intelligence products.

We will be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations. We are looking at the issue from all angles and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we have made in AI over the past few weeks. That includes some foundational advances in our underlying models, such as our progress on long-context understanding with one million tokens, and our open models, both of which have been well received.
We know what it takes to create great products that are used and loved by billions of people and businesses, and with our infrastructure and research expertise we have an excellent springboard for the AI wave. Let's focus on what matters most: building helpful products that are worthy of our users' trust.
Google's CEO criticizes "unacceptable" failures in "Gemini" image generation
