(maadaa AI News Weekly: May 21 ~ May 27)
1. LinkedIn Copilot to Feed AI Models with Real-World Career Insights
News:
Microsoft reportedly plans to integrate its AI-powered Copilot technology into LinkedIn, aiming to enhance professional networking and job search capabilities on the platform. This initiative, dubbed “LinkedIn Copilot,” aligns with Microsoft’s broader strategy of leveraging AI across its product suite to streamline operations and boost productivity.
Key Points:
- LinkedIn Copilot will likely leverage AI to automate tasks and provide personalized networking and job-searching recommendations.
- Microsoft’s existing Copilot integrations, built on OpenAI’s GPT models, have automated tasks like email summarization and cybersecurity threat analysis.
- The 2024 Work Trend Index highlights the growing adoption of AI in the workplace and the demand for AI tools to manage workloads effectively.
Why It Matters?
The LinkedIn Copilot initiative could enrich AI training datasets with LinkedIn’s professional data, leading to more accurate user recommendations and a deeper understanding of workplace trends. Pairing Microsoft’s Copilot technology with LinkedIn’s data would let the underlying models learn and improve continuously, creating a virtuous cycle in which the AI both understands and supports the professional world.
2. AI Giants Court Hollywood for Video Training Data Goldmine
News:
Major tech giants like Alphabet, Meta, and Microsoft are negotiating licensing deals with Hollywood studios to access their content for training advanced AI video generation models. These models can create realistic scenes from text prompts, promising cost reductions for filmmakers while raising concerns over intellectual property control.
Key Points:
- Alphabet, Meta, and Microsoft are offering substantial sums to studios for licensing content to train AI video generation tools like OpenAI’s Sora and Alphabet’s Veo.
- Studios are cautious about licensing content without retaining ownership and control over how it is used, as seen in Scarlett Johansson’s objection to OpenAI’s use of a voice resembling hers.
- While some studios are open to limited licensing, others, like Disney and Netflix, are interested in collaborations rather than full content licensing.
Why It Matters?
Securing licensing deals with major studios would provide tech companies with a vast trove of high-quality video content to train their AI models, significantly enhancing their capabilities in generating realistic and diverse visuals from text descriptions. This would not only drive innovation in AI video generation but also open up new avenues for content creation, potentially reducing production costs and enabling more immersive storytelling experiences.
3. Call Arc: The Future of Conversational AI Search Experiences
News:
The Browser Company has introduced a new AI-driven feature called Call Arc for its Arc Search app, allowing users to ask questions and receive instant voice responses by making a phone call. This novel approach aims to provide a more engaging and efficient way to obtain information on the go, positioning itself as a fresh take on voice search.
Key Points:
- Call Arc enables users to hold their phone to their ear and ask questions, receiving voice responses from the AI.
- It is designed to answer quick, simple queries, making it well suited to situations like cooking or multitasking.
- The feature displays an animated smiley face on screen, its mouth moving as it delivers spoken answers.
- Arc Search, introduced in January, includes a ‘Browse for me’ function that generates web pages with relevant information for queries.
Why It Matters?
By capturing spoken, natural-language queries, Call Arc can supply conversational AI training with interaction patterns that typed search rarely surfaces. User feedback on the spoken answers helps refine responses, and as the feature gains users it generates a growing stream of data for model improvement.
4. Additional News:
1. A new study suggests that eye scans powered by artificial intelligence could detect Parkinson’s disease years before symptoms appear.
2. Meta is now considering paying news publishers for content to train its AI tools, marking a shift in its approach to enhance training data quality.
3. According to a report by consulting firm Strategy&, the Middle East could see a yearly revenue of approximately $24 billion from Generative AI by 2030.
4. TikTok announced the launch of “TikTok Symphony,” a generative AI suite for brands, to assist in scriptwriting, video production, and enhancing existing assets in its ads business.
5. Meta’s AI research lab introduced Chameleon, a new family of ‘early-fusion token-based’ AI models that can understand and generate text and images in any order.
Shared Open & Commercial Datasets
Open Dataset #1: AVA Dataset
Provided by the Google Research team, this dataset annotates movie clips with the human actions they contain. It is widely used in computer vision research to train models that recognize human actions in video.
https://research.google.com/ava/
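As a minimal sketch of how these annotations are typically consumed, the snippet below reads an AVA-style CSV with pandas. It assumes the common headerless layout (video id, keyframe timestamp, normalized box coordinates, action id, person id); verify the column order against the release you actually download, and note that the file name in the usage comment is only illustrative.

```python
# Minimal sketch: reading AVA-style action annotations with pandas.
# Assumed headerless CSV layout (check against the downloaded release):
# video_id, timestamp, x1, y1, x2, y2, action_id, person_id
# with box coordinates normalized to [0, 1].
import pandas as pd

COLUMNS = ["video_id", "timestamp", "x1", "y1", "x2", "y2", "action_id", "person_id"]

def load_ava_annotations(csv_path: str) -> pd.DataFrame:
    """Load AVA-style annotations into a DataFrame."""
    return pd.read_csv(csv_path, header=None, names=COLUMNS)

def actions_for_keyframe(df: pd.DataFrame, video_id: str, timestamp: float) -> pd.DataFrame:
    """Return every annotated person box and action for one keyframe."""
    return df[(df["video_id"] == video_id) & (df["timestamp"] == timestamp)]

# Example usage (file name and video id are hypothetical):
# df = load_ava_annotations("ava_train.csv")
# print(actions_for_keyframe(df, "some_video_id", 902).head())
```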
Open Dataset #2: MS MARCO (Microsoft MAchine Reading COmprehension Dataset)
MS MARCO is a large-scale dataset for natural language understanding and search applications, used to train models that answer questions posed in natural language.
https://paperswithcode.com/dataset/ms-marco
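For a quick look at the data, the sketch below loads MS MARCO through the Hugging Face `datasets` library. It assumes the public "microsoft/ms_marco" mirror with the "v2.1" configuration; the field names ("query", "answers", "passages") follow that mirror and may differ in other releases.

```python
# Minimal sketch: exploring MS MARCO via the Hugging Face `datasets` library.
# Assumes the "microsoft/ms_marco" mirror, "v2.1" configuration.
from datasets import load_dataset

def preview_ms_marco(n: int = 3) -> None:
    """Print the first few queries with their reference answers."""
    ds = load_dataset("microsoft/ms_marco", "v2.1", split="train")
    for example in ds.select(range(n)):
        print("Q:", example["query"])
        print("A:", example["answers"])
        # Each example also carries candidate passages useful for retrieval training.
        print("candidate passages:", len(example["passages"]["passage_text"]))
        print()

if __name__ == "__main__":
    preview_ms_marco()
```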
Commercial Dataset #1: Multi-modal Generative AI Large Datasets — Licensed
maadaa.ai’s large dataset is purpose-built for state-of-the-art multi-modal large language models and includes a range of structured data such as image-text pairs, video-text pairs, and e-books in Markdown. Licensed in accordance with international copyright rules, it brings authentic, diverse material into Generative AI model training, helping push Generative AI models toward greater accuracy and innovation.
https://maadaa.ai/datasets/GenDatasetDetail/Multi-modal-Generative-AI-Large-Datasets---Licensed
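The exact delivery schema for this commercial dataset is not public, so the records below are purely hypothetical; they only illustrate how licensed image-text and video-text pairs of this kind are often packaged as JSONL shards for multi-modal training pipelines.

```python
# Hypothetical sketch only: maadaa.ai's actual schema is not public.
# Illustrates a common JSONL packaging of image-text / video-text pairs
# (one JSON record per line), with made-up file paths and captions.
import json

sample_records = [
    {"type": "image_text", "media_uri": "images/000001.jpg",
     "caption": "A chef plating a dessert in a restaurant kitchen.",
     "license": "commercial"},
    {"type": "video_text", "media_uri": "videos/000042.mp4",
     "caption": "A drone shot panning over a coastal city at sunset.",
     "license": "commercial"},
]

# Write a small shard, then read it back the way a training loader might.
with open("sample_shard.jsonl", "w", encoding="utf-8") as f:
    for record in sample_records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

with open("sample_shard.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["type"], "->", record["caption"])
```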
Commercial Dataset #2: Video Highlight Moment Dataset
This dataset consists of video clips collected from the internet, with an average length of around 10 seconds and a resolution of at least 720 × 1280; the collection contains roughly 9,000 clips.
The clips span more than ten major categories, including sports, fast-paced and slow-paced sports, anaerobic sports, food, travel, and daily life, with each major category further divided into several subcategories. Highlight moments are labeled with millisecond accuracy and do not overlap.
https://maadaa.ai/datasets/DatasetsDetail/Video-Highlight-Moment-Dataset
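The dataset's delivery format is not public, so the sketch below is only a hypothetical record type mirroring the description above: millisecond-accurate highlight spans within a clip that must not overlap, plus a small validity check one might run after loading annotations.

```python
# Hypothetical sketch: a highlight-span record and a non-overlap check.
# Field names are illustrative; the commercial dataset's real schema may differ.
from dataclasses import dataclass
from typing import List

@dataclass
class HighlightSpan:
    clip_id: str
    start_ms: int   # highlight start, in milliseconds
    end_ms: int     # highlight end, in milliseconds
    category: str   # e.g. "sports", "food", "travel"

def spans_are_valid(spans: List[HighlightSpan]) -> bool:
    """Check that spans are well-formed and do not overlap within a clip."""
    if not all(s.start_ms < s.end_ms for s in spans):
        return False
    ordered = sorted(spans, key=lambda s: (s.clip_id, s.start_ms))
    for prev, cur in zip(ordered, ordered[1:]):
        if cur.clip_id == prev.clip_id and cur.start_ms < prev.end_ms:
            return False
    return True

# Example usage with made-up values:
# spans = [HighlightSpan("clip_001", 1200, 3450, "sports"),
#          HighlightSpan("clip_001", 3450, 5000, "sports")]
# assert spans_are_valid(spans)
```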
Source:
1. https://www.businessinsider.com/microsoft-ai-linkedin-copilot-document-2024-5
2. https://finance.yahoo.com/news/alphabet-meta-engage-hollywood-studios-133103166.html
3. https://techcrunch.com/2024/05/23/arc-searchs-new-call-arc-feature-lets-you-ask-questions-by-making-a-phone-call/