<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ai &#8211; News &#8211; Proteine-bio</title>
	<atom:link href="https://www.proteine-bio.com/tags/ai/feed" rel="self" type="application/rss+xml" />
	<link>https://www.proteine-bio.com</link>
	<description></description>
	<lastBuildDate>Mon, 16 Feb 2026 04:24:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.1</generator>
	<item>
		<title>Google’s Duolingo AI Conversation Practice Powered by Gemini API.</title>
		<link>https://www.proteine-bio.com/biology/googles-duolingo-ai-conversation-practice-powered-by-gemini-api.html</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 16 Feb 2026 04:24:43 +0000</pubDate>
				<category><![CDATA[Biology]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[duolingo]]></category>
		<category><![CDATA[practice]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/googles-duolingo-ai-conversation-practice-powered-by-gemini-api.html</guid>

					<description><![CDATA[Google and Duolingo have teamed up to bring a new AI-powered conversation practice feature to...]]></description>
										<content:encoded><![CDATA[<p>Google and Duolingo have teamed up to bring a new AI-powered conversation practice feature to language learners. This tool uses Google’s Gemini API to help users practice real-life speaking scenarios. The update is now live in the Duolingo app for select languages.   </p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google’s Duolingo AI Conversation Practice Powered by Gemini API."><br />
                <img fetchpriority="high" decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/75ababed637f4c41920f0bc85b6ecffb.jpg" alt="Google’s Duolingo AI Conversation Practice Powered by Gemini API. " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google’s Duolingo AI Conversation Practice Powered by Gemini API.)</em></span>
                </p>
<p>The new feature lets learners talk with an AI conversation partner that responds naturally. It listens to what users say and gives instant feedback on pronunciation, grammar, and word choice. The system adapts to each learner’s level so conversations stay challenging but not overwhelming.  </p>
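<p>The adaptive-difficulty behavior described above can be sketched as a simple feedback rule: raise the level after a flawless run, lower it when errors pile up, and clamp to a fixed range. This is a hypothetical illustration, not Duolingo&#8217;s published logic; the function name and thresholds are assumptions.</p>

```python
# Hypothetical sketch of difficulty adaptation; not Duolingo's implementation.
def adapt_level(level: int, recent_errors: list[int],
                min_level: int = 1, max_level: int = 10) -> int:
    """Return the next conversation difficulty level.

    recent_errors holds error counts for the last few exchanges (0 = flawless).
    """
    if not recent_errors:
        return level
    avg = sum(recent_errors) / len(recent_errors)
    if avg == 0:        # flawless run: make conversations more challenging
        level += 1
    elif avg >= 3:      # struggling: ease off so it is not overwhelming
        level -= 1
    return max(min_level, min(max_level, level))
```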
<p>Duolingo says this update marks a big step in making language practice more interactive. Users can now simulate everyday situations like ordering food or asking for directions. The AI mimics how native speakers talk, including pauses, filler words, and casual phrasing.  </p>
<p>Google’s Gemini API powers the intelligence behind these conversations. It processes speech quickly and understands context well. That helps the AI keep chats flowing smoothly even when users make mistakes. The technology also learns from each interaction to improve future responses.  </p>
<p>Both companies believe this collaboration will help millions of learners gain confidence in speaking. Duolingo has over 100 million monthly active users, many of whom struggle with real-world conversation. The new tool offers a low-pressure way to practice without fear of judgment.  </p>
<p>Early testing shows users spend more time practicing when they can talk with a responsive AI partner. Duolingo plans to expand the feature to more languages in the coming months. The company will also add new topics and scenarios based on user feedback.  </p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google’s Duolingo AI Conversation Practice Powered by Gemini API."><br />
                <img decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/923b1f491facbf84b081ccd4b98e4624.jpg" alt="Google’s Duolingo AI Conversation Practice Powered by Gemini API. " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google’s Duolingo AI Conversation Practice Powered by Gemini API.)</em></span>
                </p>
<p>                 This integration highlights how AI can support education in practical ways. It gives learners immediate, personalized support whenever they need it.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google’s Manufacturing AI Vision Inspects Defects at Scale.</title>
		<link>https://www.proteine-bio.com/biology/googles-manufacturing-ai-vision-inspects-defects-at-scale.html</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 15 Feb 2026 04:25:35 +0000</pubDate>
				<category><![CDATA[Biology]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[system]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/googles-manufacturing-ai-vision-inspects-defects-at-scale.html</guid>

					<description><![CDATA[Google has launched a new AI-powered vision system designed to spot defects in manufacturing. The...]]></description>
										<content:encoded><![CDATA[<p>Google has launched a new AI-powered vision system designed to spot defects in manufacturing. The system uses advanced machine learning to inspect products at high speed and with great accuracy. It can identify tiny flaws that human eyes might miss. This helps factories catch problems early and reduce waste. </p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google’s Manufacturing AI Vision Inspects Defects at Scale."><br />
                <img decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/720bd53185b3de20bf9f7477c288477a.jpg" alt="Google’s Manufacturing AI Vision Inspects Defects at Scale. " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google’s Manufacturing AI Vision Inspects Defects at Scale.)</em></span>
                </p>
<p>The technology works by analyzing images from cameras placed along production lines. It compares each item against a set of known good examples. If something looks wrong, the system flags it right away. This allows for real-time quality control without slowing down output.</p>
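<p>The &#8220;compare against known good examples&#8221; step can be illustrated with a minimal sketch: an item is flagged when its image feature vector matches no known-good reference closely enough. Google has not published its algorithm; the vectors, threshold, and function names below are assumptions.</p>

```python
import math

# Illustrative sketch only; not Google's inspection system.
def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_defective(item: list[float], good_refs: list[list[float]],
                 threshold: float = 0.95) -> bool:
    """Flag the item if its best match among good references is too weak."""
    best = max(cosine(item, ref) for ref in good_refs)
    return best < threshold
```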
<p>Manufacturers using the system report fewer errors and faster inspection times. One pilot customer saw a 30% drop in defective parts after adopting the tool. Google says the system adapts quickly to different products and environments. It does not need major changes to existing factory setups.</p>
<p>The AI model was trained on millions of product images from various industries. This wide range of data helps it recognize defects across many types of goods. It also learns from new examples over time, getting better as it goes.</p>
<p>Google built the system with privacy and security in mind. All image data stays within the customer’s own systems unless they choose otherwise. The company offers support for integration with common industrial platforms. This makes it easier for factories to start using the tool without long delays.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google’s Manufacturing AI Vision Inspects Defects at Scale."><br />
                <img loading="lazy" decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/3606823a2f56c6f19dbd6ec15d5ac810.gif" alt="Google’s Manufacturing AI Vision Inspects Defects at Scale. " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google’s Manufacturing AI Vision Inspects Defects at Scale.)</em></span>
                </p>
<p>                 Factories today face pressure to deliver high quality while cutting costs. Automated visual inspection helps meet both goals. Google’s new offering gives manufacturers a practical way to add smart quality checks without heavy investment.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google’s AI Feature Language Support Currently Limited for Video Generation Tools.</title>
		<link>https://www.proteine-bio.com/biology/googles-ai-feature-language-support-currently-limited-for-video-generation-tools.html</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 14 Feb 2026 04:28:44 +0000</pubDate>
				<category><![CDATA[Biology]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[video]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/googles-ai-feature-language-support-currently-limited-for-video-generation-tools.html</guid>

					<description><![CDATA[Google has added new language support to its AI tools but video generation features remain...]]></description>
										<content:encoded><![CDATA[<p>Google has added new language support to its AI tools, but video generation features remain limited. The company rolled out updates for text and image creation in more languages. However, users who want to make videos with AI will find fewer options. Right now the video tools only work well in English. People using other languages may face errors or missing functions.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google’s AI Feature Language Support Currently Limited for Video Generation Tools."><br />
                <img loading="lazy" decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/80cede7d0f02031afa1a6d4f76b76463.jpg" alt="Google’s AI Feature Language Support Currently Limited for Video Generation Tools. " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google’s AI Feature Language Support Currently Limited for Video Generation Tools.)</em></span>
                </p>
<p>This gap comes as Google pushes harder into generative AI. Its main rivals like Meta and OpenAI also offer multilingual support but focus mostly on text. Video is harder to handle because it needs more data and computing power. Google says it is working on expanding language coverage for video. No clear timeline has been shared yet.  </p>
<p>Users in non-English markets have noticed the difference. Some report that prompts in their native tongue do not produce good results. Others say the system ignores parts of their request. Google’s help pages confirm that full video support is still in progress.  </p>
<p>The company did not give reasons for the delay. Experts believe training video models in many languages takes time. It also requires large amounts of clean video data with accurate captions. That kind of data is not easy to collect for every language.  </p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google’s AI Feature Language Support Currently Limited for Video Generation Tools."><br />
                <img loading="lazy" decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/572db14a464d9350fe09b07e1b8872b8.jpg" alt="Google’s AI Feature Language Support Currently Limited for Video Generation Tools. " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google’s AI Feature Language Support Currently Limited for Video Generation Tools.)</em></span>
                </p>
<p>                 For now people who rely on Google’s AI for video must use English. Those who need local language output might look elsewhere. Google continues to improve its systems but video remains a work in progress.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Zuckerberg Vows Major 2026 AI Push, Focused on Commerce with New “Agentic” Tools</title>
		<link>https://www.proteine-bio.com/chemicalsmaterials/zuckerberg-vows-major-2026-ai-push-focused-on-commerce-with-new-agentic-tools.html</link>
					<comments>https://www.proteine-bio.com/chemicalsmaterials/zuckerberg-vows-major-2026-ai-push-focused-on-commerce-with-new-agentic-tools.html#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 01 Feb 2026 08:28:50 +0000</pubDate>
				<category><![CDATA[Chemicals&Materials]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[meta]]></category>
		<category><![CDATA[zuckerberg]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/zuckerberg-vows-major-2026-ai-push-focused-on-commerce-with-new-agentic-tools.html</guid>

					<description><![CDATA[Meta CEO Mark Zuckerberg revealed during an investor call on Wednesday that the company will...]]></description>
										<content:encoded><![CDATA[<div>Meta CEO Mark Zuckerberg revealed during an investor call on Wednesday that the company will roll out a new generation of AI models and products to users in the coming months. He stated, &#8220;In 2025, we rebuilt the foundation of our AI project,&#8221; and predicted that &#8220;the new year will continue to push the boundaries of technology.&#8221;&nbsp;&nbsp;</div>
<div><img decoding="async" src="https://www.proteine-bio.com/wp-content/uploads/2026/02/ba5575f19f6f0e4061910ca49e9b7137.webp" data-filename="filename" style="width: 471.771px;"></div>
<div>Although no specific timeline was disclosed, Zuckerberg emphasized that AI-driven commerce will become a core focus. He noted, &#8220;New intelligent shopping tools will help users accurately match their needs from a vast business catalog.&#8221; This statement aligns with the broader industry trend of exploring AI shopping assistants—Google and OpenAI have already established intelligent transaction platforms and secured partnerships with companies such as Stripe and Uber.&nbsp;&nbsp;</div>
<div>Unlike other AI labs that have built extensive technical infrastructure, Meta believes its unique advantage lies in its personal data assets. Zuckerberg explained, &#8220;We are witnessing the potential of AI to understand personal context, including history, interests, content, and social relationships. The value of intelligent agents largely depends on the unique contextual information they can access, and Meta is poised to deliver an irreplaceable personalized experience.&#8221;&nbsp;&nbsp;</div>
<div>This announcement signals Meta’s accelerated integration of AI technology into its social and commercial ecosystems, aiming to build a differentiated competitive advantage by combining personalized data with intelligent agent technology.</div>
<div>Roger Luo said: Meta is deeply integrating AI with social data to establish a moat in the agentic commerce space. However, whether its massive infrastructure investment can translate into a sustainable business model remains to be tested by the market.</div>
<p>
        All articles and pictures are from the Internet. If there are any copyright issues, please contact us in time to delete. </p>
<p><b>Inquiry us</b> [contact-form-7]</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.proteine-bio.com/chemicalsmaterials/zuckerberg-vows-major-2026-ai-push-focused-on-commerce-with-new-agentic-tools.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google enables seamless transition from AI Overviews to AI Mode</title>
		<link>https://www.proteine-bio.com/chemicalsmaterials/google-enables-seamless-transition-from-ai-overviews-to-ai-mode.html</link>
					<comments>https://www.proteine-bio.com/chemicalsmaterials/google-enables-seamless-transition-from-ai-overviews-to-ai-mode.html#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 29 Jan 2026 00:25:59 +0000</pubDate>
				<category><![CDATA[Chemicals&Materials]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[search]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/google-enables-seamless-transition-from-ai-overviews-to-ai-mode.html</guid>

					<description><![CDATA[Google recently upgraded its AI search experience, now allowing users to directly ask follow-up questions...]]></description>
										<content:encoded><![CDATA[<p>Google recently upgraded its AI search experience, now allowing users to directly ask follow-up questions from the &#8220;AI Overview&#8221; on the search results page and seamlessly switch to &#8220;AI Mode&#8221; for multi-turn, in-depth conversations.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="Google Logo"><br />
                <img loading="lazy" decoding="async" class="wp-image-48 size-full" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/8d0d67e76d605abd673c3be3a037a92d.webp" alt="" width="380" height="250"></a></p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Google Logo)</em></span></p>
<p>At the same time, the default model for AI Overviews worldwide has been upgraded to the more powerful Gemini 3.0.</p>
<p>This update aims to distinguish between simple queries and complex exploratory scenarios. Users can not only quickly obtain instant information such as scores and weather but also engage in natural conversations to delve deeply into various topics.</p>
<p>Google stated that testing confirmed follow-up questions that preserve context significantly enhance the practicality of search, and that the new design lets users move smoothly from brief summaries into deeper conversations.</p>
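<p>A follow-up &#8220;preserves context&#8221; when the whole prior exchange is carried into each new model request, so later questions can refer back to earlier turns. The sketch below is a generic illustration of that pattern, not Google&#8217;s API; all names are hypothetical.</p>

```python
# Hypothetical sketch of context-preserving follow-ups; not Google's API.
# Each follow-up sends the entire history, so the model can resolve
# references like "what about cheaper ones?" against earlier turns.
history: list[dict] = []

def ask(question: str, answer_fn) -> str:
    """Record the question, answer over the full history, record the answer."""
    history.append({"role": "user", "content": question})
    answer = answer_fn(history)        # the model sees every prior turn
    history.append({"role": "assistant", "content": answer})
    return answer
```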
<p>This update connects with the recently launched &#8220;Personal Intelligence&#8221; feature, which leverages users&#8217; personal data, such as Gmail and Photos, to let the AI provide personalized responses. Together, these initiatives drive Google Search&#8217;s ongoing evolution from a traditional list of results toward a dynamic, interactive intelligent assistant.</p>
<p>Roger Luo said: This update marks a pivotal shift of search engines from information retrieval to conversational cognitive partners. By lowering interaction barriers, Google not only improves the user experience but also strengthens its strategic position as a gateway in the competitive landscape of intelligent service ecosystems.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.proteine-bio.com/chemicalsmaterials/google-enables-seamless-transition-from-ai-overviews-to-ai-mode.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google announced that its cost-effective AI Plus plan is now fully available in global markets.</title>
		<link>https://www.proteine-bio.com/chemicalsmaterials/google-announced-that-its-cost-effective-ai-plus-plan-is-now-fully-available-in-global-markets.html</link>
					<comments>https://www.proteine-bio.com/chemicalsmaterials/google-announced-that-its-cost-effective-ai-plus-plan-is-now-fully-available-in-global-markets.html#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 28 Jan 2026 16:28:37 +0000</pubDate>
				<category><![CDATA[Chemicals&Materials]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[markets]]></category>
		<category><![CDATA[plan]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/google-announced-that-its-cost-effective-ai-plus-plan-is-now-fully-available-in-global-markets.html</guid>

					<description><![CDATA[The plan covers 35 newly added countries and regions, having been gradually rolled out to...]]></description>
										<content:encoded><![CDATA[<p>Google’s AI Plus plan now covers 35 newly added countries and regions, having been gradually rolled out to dozens of markets since its initial launch in Indonesia last September.</p>
<p>The core features of the plan include access to the Gemini 3 Pro and Nano Pro models within the Gemini app, AI video creation through Veo, research and writing assistance via NotebookLM, 200GB of storage, and the ability to share benefits with up to five family members. Existing Google One Premium (2TB) users will be automatically upgraded to receive all these benefits in the coming days.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="GettyImages"><br />
                <img loading="lazy" decoding="async" class="wp-image-48 size-full" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/ef13fe68133dfd6e60fff3831d83107a.webp" alt="" width="380" height="250"></a></p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (GettyImages)</em></span></p>
<p>Positioned as the first upgrade option above the free tier, the plan primarily targets users who do not need, or cannot afford, the high-end Pro version priced at $20 per month. Its tiered pricing strategy (e.g., approximately $4.44 per month in India) competes directly with OpenAI’s ChatGPT Go plan. The aim is to attract emerging-market and casual users with an accessible price point, fostering long-term usage habits and accelerating both AI adoption and user penetration.</p>
<p>Roger Luo said: Google lowers the threshold for AI usage through a differentiated pricing strategy, filling the gap between the free and high-end tiers with a mid-range package. This move not only benchmarks directly against competitors but also focuses on cultivating user habits in emerging markets, laying the foundation for a long-term ecosystem strategy.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.proteine-bio.com/chemicalsmaterials/google-announced-that-its-cost-effective-ai-plus-plan-is-now-fully-available-in-global-markets.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>YouTube creator sues Snap accusing its AI model training of copyright infringement</title>
		<link>https://www.proteine-bio.com/chemicalsmaterials/youtube-creator-sues-snap-accusing-its-ai-model-training-of-copyright-infringement.html</link>
					<comments>https://www.proteine-bio.com/chemicalsmaterials/youtube-creator-sues-snap-accusing-its-ai-model-training-of-copyright-infringement.html#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 27 Jan 2026 08:27:23 +0000</pubDate>
				<category><![CDATA[Chemicals&Materials]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[snap]]></category>
		<category><![CDATA[youtube]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/youtube-creator-sues-snap-accusing-its-ai-model-training-of-copyright-infringement.html</guid>

					<description><![CDATA[A group of YouTube creators are suing multiple tech giants for illegally capturing their videos...]]></description>
										<content:encoded><![CDATA[<p>A group of YouTube creators is suing multiple tech giants for illegally capturing their videos to train AI models, and Snap has recently been added to the list of defendants. The three plaintiffs, who collectively have approximately 6.2 million subscribers, accuse Snap of using their video content to train an AI system behind in-app AI features such as &#8220;Imagine Lens,&#8221; which allows users to edit images through text commands.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="evan spiegel"><br />
                <img loading="lazy" decoding="async" class="wp-image-48 size-full" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/9dce6b3e3edc8602ef713e7de2d6a249.webp" alt="" width="380" height="250"></a></p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (evan spiegel)</em></span></p>
<p>The plaintiffs had previously filed lawsuits against Nvidia, Meta, and ByteDance for similar reasons.</p>
<p>The latest proposed class-action lawsuit was filed in the United States District Court for the Central District of California last Friday. The plaintiffs specifically point out that Snap used HD-VILA-100M, a large-scale video-language dataset limited to academic research purposes, along with other such datasets. They claim that, in order to use the dataset commercially, Snap circumvented YouTube&#8217;s technical restrictions, terms of service, and license provisions prohibiting commercial use.</p>
<p>The lawsuit seeks statutory damages and a permanent injunction against future infringement.</p>
<p>The case is led mainly by the creators of the h3h3 YouTube channel, which has 5.52 million subscribers, along with the smaller golf channels MrShortGame Golf and Golfholics.</p>
<p>This is the latest in a string of cases brought by content creators against AI model suppliers. Previous copyright disputes have come from publishers, writers, newspapers, user-generated-content platforms, artists, and other parties, and this is not the first lawsuit initiated by YouTube creators. According to the non-profit organization Copyright Alliance, there have been over 70 copyright infringement cases against AI companies.</p>
<p>The progress of such lawsuits varies: in the case between Meta and a group of writers, the judge ruled in favor of the tech giant; in the case between Anthropic and a group of authors, the AI company chose to settle with the plaintiffs and pay compensation. The majority of cases are still being actively litigated.</p>
<p>Roger Luo said: This case centers on whether the commercial use of &#8220;research-only&#8221; datasets for AI training constitutes a substantive violation of both original content copyrights and platform terms of service. It touches on a universal legal challenge of the generative AI era: defining the boundaries of data ownership and fair use in training materials.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.proteine-bio.com/chemicalsmaterials/youtube-creator-sues-snap-accusing-its-ai-model-training-of-copyright-infringement.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Anthropic CEO&#8217;s Davos speech caused shock, publicly criticizing Nvidia</title>
		<link>https://www.proteine-bio.com/chemicalsmaterials/anthropic-ceos-davos-speech-caused-shock-publicly-criticizing-nvidia.html</link>
					<comments>https://www.proteine-bio.com/chemicalsmaterials/anthropic-ceos-davos-speech-caused-shock-publicly-criticizing-nvidia.html#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 22 Jan 2026 08:25:22 +0000</pubDate>
				<category><![CDATA[Chemicals&Materials]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[nvidia]]></category>
		<category><![CDATA[security]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/anthropic-ceos-davos-speech-caused-shock-publicly-criticizing-nvidia.html</guid>

					<description><![CDATA[The US government recently officially approved Nvidia and AMD to export high-performance AI chips to...]]></description>
										<content:encoded><![CDATA[<p>The US government recently approved Nvidia and AMD to export high-performance AI chips, including the Nvidia H200 series, to some Chinese customers. The policy shift came after the authorities re-evaluated the ban on chip exports to China, and it has drawn close attention from the industry.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="Benjamin Girette"><br />
                <img loading="lazy" decoding="async" class="wp-image-48 size-full" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/74b1beb1288b0e95db4485bb5089c941.webp" alt="" width="380" height="250"></a></p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Benjamin Girette)</em></span></p>
<p>At the World Economic Forum in Davos, Dario Amodei, CEO of the artificial intelligence company Anthropic, strongly criticized the move, likening the chip export policy to &#8220;selling nuclear weapons to North Korea.&#8221; Notably, Anthropic is not only an important technology partner of Nvidia but also a strategic investment target in which the latter has pledged to invest billions of dollars. Amodei warned that these exports could erode the United States&#8217; lead in chip manufacturing.</p>
<p>&#8220;We have been leading China in chip manufacturing capabilities for many years, and exporting these high-performance AI chips would be a strategic mistake,&#8221; Amodei stated at the forum. He further emphasized that artificial intelligence has profound national security implications, and that future AI systems may amount to a &#8220;country of geniuses in a datacenter.&#8221;</p>
<p>This controversy highlights the intensifying technological competition in the field of artificial intelligence. Although business cooperation and investment relationships remain, industry leaders&#8217; positions on national security and technological leadership have become increasingly explicit. Analysts note that, as global AI competition intensifies, corporate decision-making is gradually moving beyond traditional business considerations toward a broader strategic-security dimension.</p>
<p></p>
<p>Roger Luo said:This controversy highlights the profound contradiction in the global AI competition: while companies pursue commercial interests and technological leadership, they have to face security challenges brought about by technological diffusion.</p>
<p>
        All articles and pictures are from the Internet. If there are any copyright issues, please contact us in time to delete. </p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.proteine-bio.com/chemicalsmaterials/anthropic-ceos-davos-speech-caused-shock-publicly-criticizing-nvidia.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Musk announces that Tesla Dojo3 chip will be dedicated to &#8216;space AI computing&#8217;</title>
		<link>https://www.proteine-bio.com/chemicalsmaterials/musk-announces-that-tesla-dojo3-chip-will-be-dedicated-to-space-ai-computing.html</link>
					<comments>https://www.proteine-bio.com/chemicalsmaterials/musk-announces-that-tesla-dojo3-chip-will-be-dedicated-to-space-ai-computing.html#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 21 Jan 2026 07:55:37 +0000</pubDate>
				<category><![CDATA[Chemicals&Materials]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[chip]]></category>
		<category><![CDATA[tesla]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/musk-announces-that-tesla-dojo3-chip-will-be-dedicated-to-space-ai-computing.html</guid>

					<description><![CDATA[Elon Musk recently announced that Tesla plans to restart its previously stalled third-generation AI chip...]]></description>
										<content:encoded><![CDATA[<p>Elon Musk recently announced that Tesla plans to restart its previously stalled third-generation AI chip project, Dojo3. Unlike before, the goal of this chip will no longer be focused on training ground autonomous driving models, but will shift towards the field of &#8220;space AI computing&#8221;.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="Musk announces that Tesla Dojo3 chip will be dedicated to 'space AI computing'"><br />
                <img loading="lazy" decoding="async" class="wp-image-48 size-full" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/2c89407f837536be6472466341942126.webp" alt="" width="380" height="250"></a></p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (Musk announces that Tesla Dojo3 chip will be dedicated to &#8216;space AI computing&#8217;)</em></span></p>
<p>This move comes just five months after Tesla suspended the Dojo project. Following the departure of project leader Peter Bannon, Tesla disbanded the team responsible for the Dojo supercomputer. About 20 former team members subsequently joined DensityAI, an emerging AI infrastructure company co-founded by former Dojo leader Ganesh Venkataramanan and former Tesla employees Bill Zhang and Ben Florin.</p>
<p>When the Dojo project was suspended, reports indicated that Tesla planned to scale back its investment in self-developed chips, rely more on computing resources from partners such as NVIDIA and AMD, and have Samsung handle chip manufacturing. Musk&#8217;s latest statement suggests the company&#8217;s strategy may be shifting again.</p>
<p>The AI5 chip Tesla currently uses is produced by TSMC and mainly supports autonomous driving functions and the Optimus humanoid robot. Last summer, Tesla signed a $16.5 billion agreement with Samsung to produce the next-generation AI6 chip, which will serve high-performance AI training in Tesla vehicles, Optimus robots, and data centers.</p>
<p>&#8220;AI7/Dojo3 will focus on space AI computing,&#8221; Musk said on Sunday, indicating that the restarted project will be given a more ambitious positioning. To that end, Tesla is rebuilding the team it disbanded several months ago, and Musk issued an open recruiting call on the same occasion: &#8220;If you are interested in helping build the world&#8217;s most widely used chip, please send an email to AI_Chips@Tesla.com.&#8221;</p>
<p>Roger Luo said: Tesla&#8217;s restart of Dojo3 with a focus on space computing shows its willingness to keep exploring and rapidly adjusting its AI chip strategy. It marks a significant shift in the company&#8217;s technology roadmap and an early move to position itself for frontier AI computing scenarios.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.proteine-bio.com/chemicalsmaterials/musk-announces-that-tesla-dojo3-chip-will-be-dedicated-to-space-ai-computing.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>TikTok&#8217;s Latest Innovation: AI-Powered Pet Recognition</title>
		<link>https://www.proteine-bio.com/biology/tiktoks-latest-innovation-ai-powered-pet-recognition.html</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 21 Jan 2026 04:38:46 +0000</pubDate>
				<category><![CDATA[Biology]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[tiktok]]></category>
		<category><![CDATA[videos]]></category>
		<guid isPermaLink="false">https://www.proteine-bio.com/biology/tiktoks-latest-innovation-ai-powered-pet-recognition.html</guid>

					<description><![CDATA[TikTok Launches AI Tool to Recognize Pets in Videos (TikTok&#8217;s Latest Innovation: AI-Powered Pet Recognition)...]]></description>
										<content:encoded><![CDATA[<p>TikTok Launches AI Tool to Recognize Pets in Videos </p>
<p style="text-align: center;">
                <a href="" target="_self" title="TikTok's Latest Innovation: AI-Powered Pet Recognition"><br />
                <img loading="lazy" decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/4dc774048fd3ae5dc1cf205ef64fe4ca.png" alt="TikTok's Latest Innovation: AI-Powered Pet Recognition " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (TikTok&#8217;s Latest Innovation: AI-Powered Pet Recognition)</em></span>
                </p>
<p>TikTok announced a new feature powered by artificial intelligence that automatically identifies pets in user videos. The company revealed the development today, with the goal of making pet-related content easier to find on the platform. Many users enjoy creating videos featuring their cats, dogs, and other animals, but finding those specific videos can be difficult. TikTok wants to solve this problem.</p>
<p>The new technology uses AI to scan videos uploaded to the app. It specifically looks for common household pets. Once a pet is detected, the system can suggest relevant hashtags. Creators can then add these hashtags to their posts. This makes the videos more discoverable to others interested in similar content. Viewers searching for pet videos should also find them easier. The AI focuses on recognizing different animals accurately.</p>
<p>TikTok stated this feature aims to improve the user experience. It helps creators connect with audiences who love animal content. The platform sees significant engagement with pet videos already. This tool builds on that existing popularity. It leverages advanced machine learning models developed by TikTok&#8217;s engineers. These models were trained on millions of pet images and videos. This training helps the AI distinguish between different types of animals effectively.</p>
<p style="text-align: center;">
                <a href="" target="_self" title="TikTok's Latest Innovation: AI-Powered Pet Recognition"><br />
                <img loading="lazy" decoding="async" class="size-medium wp-image-5057 aligncenter" src="https://www.proteine-bio.com/wp-content/uploads/2026/01/dd08e5223c3c6278f67f87557d183ed2.jpg" alt="TikTok's Latest Innovation: AI-Powered Pet Recognition " width="380" height="250"><br />
                </a>
                </p>
<p style="text-wrap: wrap; text-align: center;"><span style="font-size: 12px;"><em> (TikTok&#8217;s Latest Innovation: AI-Powered Pet Recognition)</em></span>
                </p>
<p>                 The company believes recognizing pets automatically saves creators time. Users no longer need to manually tag every video featuring their animal. The AI handles this task instantly. It analyzes the visual elements within the video frame by frame. This process happens quickly after the video is uploaded. The feature is rolling out gradually to users worldwide starting this month. It will be available in the app&#8217;s latest update. TikTok plans to monitor user feedback and refine the technology.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
