Meta's chatbot scandal stems from the absence of a "safety culture"

If you had to pick one slogan to sum up Meta Platforms at this moment, it would probably be "move fast and break things." Its CEO, Mark Zuckerberg, may by now regret ever coining the phrase, but there is plenty of evidence that he, and his company, still accept the idea of causing damage on the way to success. One of the most recent examples comes from a Reuters investigation, which found that Meta permitted its chatbots, among other things, to "engage a child in conversations that are romantic or sensual." That report was the subject of a Senate hearing last week on the safety risks these chatbots pose to children, and on the dangers of combining artificial intelligence with toxic corporate cultures.

The Meta chatbot scandal reveals a culture that is willing to sacrifice the safety and well-being of users, even children, if doing so helps advance the company's artificial-intelligence ambitions. Boosters of the technology, Zuckerberg among them, believe its capabilities are boundless. But they also acknowledge that, as Meta's CEO himself has said, it "will raise new safety issues." One reason the risks of AI systems are so hard to manage is that the technology is probabilistic by nature: even small changes in its inputs can produce large changes in its outputs, which makes its behavior very difficult to predict and control.

What is a safety culture?

This is where a "safety culture" matters. In companies with a safety culture, safety is always the top priority. Everyone in the organization is empowered to raise safety concerns, regardless of their seniority or how difficult or costly the problems are to fix. If you know exactly what a system is going to do, you can push it right up to the edge. But with unpredictable technology such as AI, companies need to be cautious and steer clear of the gray areas. That level of caution is the product of culture, not of formal rules.

Boeing once had such a culture. During the development of the Boeing 707, for example, the company's chief test pilot, Tex Johnston, pressed for a costly redesign of the aircraft's tail to correct an instability that could occur if a pilot exceeded the plane's maximum bank angle. The chief engineer's response? "We'll fix that." Boeing absorbed the full cost of the change rather than passing it on to its customers. Decades later, Boeing's excessive focus on cutting costs had eroded that commitment to safety to the point that critical defects in the 737 MAX 8 were ignored, until two aircraft crashed and 346 people died.

Describing a child as attractive

The Reuters report offers a window into what a company without a safety culture looks like. It cites a Meta document titled "GenAI: Content Risk Standards," which stated explicitly that an AI chatbot could describe a child in terms that show the child's attractiveness, or tell a person with stage-four colon cancer that it "is typically treated by poking the stomach with healing quartz crystals." Meta revised the document after Reuters asked about it, but that is almost beside the point.
Documents do not create a culture; they are the product of one. And a culture that accepts harming its users in pursuit of growth or profit makes dangerous outcomes all but inevitable. Meta is racing ahead with the rollout of its free AI chatbots, and those chatbots will not be safe unless the company commits to fixing its culture.

What would that effort look like? One model is the transformation led by Cynthia Carroll, CEO of the mining giant Anglo American, in South Africa from 2007 to 2013. When Carroll took the job, the company was averaging 44 deaths a year. By the time she stepped down, that number had fallen by 75%. Her change effort is a gold standard, taught in business schools around the world.

Carroll began by shutting down the company's Rustenburg platinum mine, the largest in the world, and retraining everyone who worked there, after the mine suffered five fatal accidents in the first months of her tenure. The shutdown cost Anglo American $8 million a day, a large sum even by the standards of a company like Anglo American, and it sent an unmistakable signal to the entire organization. Talk, after all, is cheap: any CEO can say "safety is our top priority," and employees have heard those words before. But giving up $8 million a day was an expensive signal, and therefore a credible one. Carroll backed it up by keeping up the pressure for another six years and putting safety at the heart of everything, from the criteria for promotion and pay to relations with unions and the government.

Zuckerberg could do something similar. He could start by freezing the rollout of Meta's AI chatbots until any child could use them completely safely. (I think most people would agree that protecting children from being preyed on by AI is the bare minimum.) He could put real force behind that by pushing for strict government regulation of AI chatbots, with stiff fines for violations. Meta could also redirect pay and promotions so that AI safety, not usage or profitability, is the most important factor in determining employee rewards.

If you find it hard to imagine Meta's CEO doing any of this, it is probably because of the short-term costs. In the long term, though, I believe Meta would benefit. Take the famous case in which Johnson & Johnson pulled Tylenol from the market after some bottles were poisoned. In the short term, the recall cost the company millions of dollars. In the long run, it cemented the company's reputation as a trustworthy, even beloved, business, and that is something money cannot buy. It is also worth remembering that companies cannot survive without a social license to operate; that is, without public acceptance. It is hard to think of a faster way to lose that license than rogue AI that puts children at risk. Then there is the fierce competition for AI talent: if you were a leading AI scientist, wouldn't you be more likely to choose an employer that encourages you to put ethics and safety first in your work?
At a moment when every major company is grappling with artificial intelligence, taking the initiative on safety is the best way for Meta to position itself as a leader in the field.
