Nvidia invests in a startup focused on improving video search

Nvidia is participating in a $50 million investment in Twelve Labs, a bet on two Korean-born engineers who want to help users quickly search through and analyze large collections of video. The startup said in a statement that New Enterprise Associates and existing investors, including Radical Ventures, Index Ventures, and Korea Investment Partners, also participated. Nvidia has backed emerging artificial intelligence companies ranging from Hugging Face to Mistral AI. Twelve Labs, based in San Francisco, offers foundation models that perform a variety of tasks, such as building chatbots or translating languages.

Twelve Labs was founded in 2021 after its two co-founders, Jae Lee and Aiden Lee, met during basic military training in their native South Korea. Its clients include social media influencers, sports leagues in the United States and Europe, and Hollywood film studios, some of which hold archives going back 75 years. The startup aims to make searching easier by pinpointing exact moments within a sea of online content: for example, when a particular soccer player celebrates a goal, or the times Gordon Ramsay gets angrier than usual over eggs.

"Video has been around in the field of artificial intelligence for decades. It is full of information and is difficult to take advantage of," Jae Lee, who is also the company's CEO, told Bloomberg News. He added: "About 80% of the world's data exists in video. For us, video is the first language, and we have built our technological foundation around it."

Targeting more users

Twelve Labs' collaboration with Nvidia aims to put its Marengo and Pegasus platforms in front of more users.
Lee said that unlike other models that work mainly with text, Twelve Labs' models were trained on video from the start, which in turn helps facilitate visual search. The company's AI models work with video, text, image, and audio, enabling search through various types of data input, such as finding video from a text query, finding audio from text, or finding video from an image. "We started our work before the rise of multimodal models. We started before generative models became a big thing," the CEO said.

Twelve Labs noted that its models are used by more than 30,000 developers in industries such as media, entertainment, advertising, automotive, and security, who use them to search video and generate summaries. The startup expects its headcount to roughly double to about 80 employees in 2024. Pegasus, the startup's latest model, which generates text from video, is now in beta testing. It is designed to understand and examine complex video content, helping to summarize it, answer questions about it, and analyze it.

Twelve Labs trains several parts of its foundation model simultaneously, reducing its size to about one-fifth of the models' initial sizes, which in turn increases compute and energy efficiency. Lee believes these developments will make handling video as easy as handling text, "without the need to spend a lot of money."
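The multimodal search the article describes is commonly built on shared embeddings: a model maps text queries and video clips into the same vector space, and search becomes a nearest-neighbor lookup. The sketch below is purely illustrative and is not Twelve Labs' actual API; the clip names, queries, and hand-made vectors are invented stand-ins for what a trained encoder would produce.

```python
import numpy as np

# Toy "embeddings": in a real multimodal system, a trained model would
# map both text queries and video clips into one shared vector space.
# These hand-made 3-d vectors exist only to illustrate the retrieval step.
CLIP_EMBEDDINGS = {
    "goal_celebration.mp4": np.array([0.9, 0.1, 0.0]),
    "cooking_segment.mp4":  np.array([0.1, 0.9, 0.2]),
    "press_conference.mp4": np.array([0.2, 0.2, 0.9]),
}

def embed_text(query: str) -> np.ndarray:
    """Stand-in for a text encoder; maps a few known queries to vectors."""
    table = {
        "player celebrates a goal": np.array([1.0, 0.0, 0.1]),
        "chef gets angry about eggs": np.array([0.0, 1.0, 0.1]),
    }
    return table[query]

def search(query: str, top_k: int = 1) -> list[str]:
    """Rank clips by cosine similarity between the query and each clip."""
    q = embed_text(query)
    q = q / np.linalg.norm(q)
    scores = {
        name: float(vec @ q / np.linalg.norm(vec))
        for name, vec in CLIP_EMBEDDINGS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(search("player celebrates a goal"))  # best-matching clip first
```

Because every modality lands in the same space, the same `search` routine works whether the query embedding came from text, an image, or audio; only the encoder changes.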