Robotics | AI News

Meta FAIR advances human-like AI with five major releases
Thu, 17 Apr 2025

The Fundamental AI Research (FAIR) team at Meta has announced five projects advancing the company’s pursuit of advanced machine intelligence (AMI).

The latest releases from Meta focus heavily on enhancing AI perception – the ability for machines to process and interpret sensory information – alongside advancements in language modelling, robotics, and collaborative AI agents.

Meta stated its goal involves creating machines “that are able to acquire, process, and interpret sensory information about the world around us and are able to use this information to make decisions with human-like intelligence and speed.”

The five new releases represent diverse but interconnected efforts towards achieving this ambitious goal.

Perception Encoder: Meta sharpens the ‘vision’ of AI

Central to the new releases is the Perception Encoder, described as a large-scale vision encoder designed to excel across various image and video tasks.

Vision encoders function as the “eyes” for AI systems, allowing them to understand visual data.

Meta highlights the increasing challenge of building encoders that meet the demands of advanced AI, requiring capabilities that bridge vision and language, handle both images and videos effectively, and remain robust under challenging conditions, including potential adversarial attacks.

The ideal encoder, according to Meta, should recognise a wide array of concepts while distinguishing subtle details—citing examples like spotting “a stingray burrowed under the sea floor, identifying a tiny goldfinch in the background of an image, or catching a scampering agouti on a night vision wildlife camera.”

Meta claims the Perception Encoder achieves “exceptional performance on image and video zero-shot classification and retrieval, surpassing all existing open source and proprietary models for such tasks.”
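For readers unfamiliar with the term, zero-shot classification with a dual image/text encoder generally works by embedding the image and a set of candidate text labels into the same space and picking the closest label. The sketch below illustrates that general pattern; the `encode_image` and `encode_text` functions are placeholders for illustration, not the Perception Encoder's actual API.

```python
# Generic zero-shot classification pattern for a dual image/text encoder.
# `encode_image` and `encode_text` are placeholders, not a real API.
import numpy as np

def zero_shot_classify(image, labels, encode_image, encode_text):
    img = encode_image(image)                                   # (d,) image embedding
    txt = np.stack([encode_text(f"a photo of a {l}") for l in labels])  # (n, d)
    # Cosine similarity between the image and each candidate label.
    img = img / np.linalg.norm(img)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    scores = txt @ img
    return labels[int(np.argmax(scores))]

# e.g. zero_shot_classify(img, ["goldfinch", "stingray", "agouti"], enc_i, enc_t)
```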

Furthermore, its perceptual strengths reportedly translate well to language tasks. 

When aligned with a large language model (LLM), the encoder is said to outperform other vision encoders in areas like visual question answering (VQA), captioning, document understanding, and grounding (linking text to specific image regions). It also reportedly boosts performance on tasks traditionally difficult for LLMs, such as understanding spatial relationships (e.g., “if one object is behind another”) or camera movement relative to an object.

“As Perception Encoder begins to be integrated into new applications, we’re excited to see how its advanced vision capabilities will enable even more capable AI systems,” Meta said.

Perception Language Model (PLM): Open research in vision-language

Complementing the encoder is the Perception Language Model (PLM), an open and reproducible vision-language model aimed at complex visual recognition tasks. 

PLM was trained using large-scale synthetic data combined with open vision-language datasets, explicitly without distilling knowledge from external proprietary models.

Recognising gaps in existing video understanding data, the FAIR team collected 2.5 million new, human-labelled samples focused on fine-grained video question answering and spatio-temporal captioning. Meta claims this forms the “largest dataset of its kind to date.”

PLM is offered in 1, 3, and 8 billion parameter versions, catering to academic research needs requiring transparency.

Alongside the models, Meta is releasing PLM-VideoBench, a new benchmark specifically designed to test capabilities often missed by existing benchmarks, namely “fine-grained activity understanding and spatiotemporally grounded reasoning.”

Meta hopes the combination of open models, the large dataset, and the challenging benchmark will empower the open-source community.

Meta Locate 3D: Giving robots situational awareness

Bridging the gap between language commands and physical action is Meta Locate 3D. This end-to-end model aims to allow robots to accurately localise objects in a 3D environment based on open-vocabulary natural language queries.

Meta Locate 3D processes 3D point clouds directly from RGB-D sensors (like those found on some robots or depth-sensing cameras). Given a textual prompt, such as “flower vase near TV console,” the system considers spatial relationships and context to pinpoint the correct object instance, distinguishing it from, say, a “vase on the table.”

The system comprises three main parts: a preprocessing step converting 2D features to 3D featurised point clouds; the 3D-JEPA encoder (a pretrained model creating a contextualised 3D world representation); and the Locate 3D decoder, which takes the 3D representation and the language query to output bounding boxes and masks for the specified objects.
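As a rough picture of how such a three-stage pipeline fits together, the sketch below wires a preprocessing step, an encoder, and a query-conditioned decoder in sequence. All names and signatures are hypothetical stand-ins for illustration only; they do not reflect Meta's released code.

```python
# Hypothetical sketch of a Locate-3D-style pipeline: names and signatures are
# illustrative only and do not reflect Meta's actual implementation or APIs.
from dataclasses import dataclass

@dataclass
class Detection:
    box_3d: list      # 3D bounding box corners
    mask: list        # per-point mask over the point cloud
    score: float

def localise(rgbd_frames, query: str, preprocess, encoder_3d_jepa, decoder):
    """Open-vocabulary 3D localisation: text query -> boxes and masks."""
    # 1) Lift 2D image features onto a featurised 3D point cloud.
    point_cloud = preprocess(rgbd_frames)
    # 2) Build a contextualised representation of the 3D scene.
    scene_repr = encoder_3d_jepa(point_cloud)
    # 3) Ground the language query in that representation.
    detections: list[Detection] = decoder(scene_repr, query)
    return detections

# e.g. localise(frames, "flower vase near TV console", pre, enc, dec)
```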

Alongside the model, Meta is releasing a substantial new dataset for object localisation based on referring expressions. It includes 130,000 language annotations across 1,346 scenes from the ARKitScenes, ScanNet, and ScanNet++ datasets, effectively doubling existing annotated data in this area.

Meta sees this technology as crucial for developing more capable robotic systems, including its own PARTNR robot project, enabling more natural human-robot interaction and collaboration.

Dynamic Byte Latent Transformer: Efficient and robust language modelling

Following research published in late 2024, Meta is now releasing the model weights for its 8-billion parameter Dynamic Byte Latent Transformer.

This architecture represents a shift away from traditional tokenisation-based language models, operating instead at the byte level. Meta claims this approach achieves comparable performance at scale while offering significant improvements in inference efficiency and robustness.

Traditional LLMs break text into ‘tokens’, which can struggle with misspellings, novel words, or adversarial inputs. Byte-level models process raw bytes, potentially offering greater resilience.
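The difference can be illustrated with a toy comparison: a word-level tokeniser may lose information on a misspelled word, while byte-level input always remains representable. The vocabulary below is invented purely for illustration and has nothing to do with the Dynamic Byte Latent Transformer's internals.

```python
# Conceptual contrast between token-level and byte-level inputs.
# The "vocabulary" here is invented purely for illustration.
text = "recieve"  # misspelling of "receive"

# Token-level: an out-of-vocabulary misspelling may map to an unknown token,
# discarding most of the information the model could have used.
toy_vocab = {"receive": 17, "the": 3, "<unk>": 0}
tokens = [toy_vocab.get(text, toy_vocab["<unk>"])]
print(tokens)       # [0] -> the word's identity is lost

# Byte-level: the raw bytes are always representable, so the model still sees
# a sequence that is very close to the correctly spelled word.
byte_ids = list(text.encode("utf-8"))
print(byte_ids)     # [114, 101, 99, 105, 101, 118, 101]
```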

Meta reports that the Dynamic Byte Latent Transformer “outperforms tokeniser-based models across various tasks, with an average robustness advantage of +7 points (on perturbed HellaSwag), and reaching as high as +55 points on tasks from the CUTE token-understanding benchmark.”

By releasing the weights alongside the previously shared codebase, Meta encourages the research community to explore this alternative approach to language modelling.

Collaborative Reasoner: Meta advances socially-intelligent AI agents

The final release, Collaborative Reasoner, tackles the complex challenge of creating AI agents that can effectively collaborate with humans or other AIs.

Meta notes that human collaboration often yields superior results, and aims to imbue AI with similar capabilities for tasks like helping with homework or job interview preparation.

Such collaboration requires not just problem-solving but also social skills like communication, empathy, providing feedback, and understanding others’ mental states (theory-of-mind), often unfolding over multiple conversational turns.

Current LLM training and evaluation methods often neglect these social and collaborative aspects. Furthermore, collecting relevant conversational data is expensive and difficult.

Collaborative Reasoner provides a framework to evaluate and enhance these skills. It includes goal-oriented tasks requiring multi-step reasoning achieved through conversation between two agents. The framework tests abilities like disagreeing constructively, persuading a partner, and reaching a shared best solution.

Meta’s evaluations revealed that current models struggle to consistently leverage collaboration for better outcomes. To address this, they propose a self-improvement technique using synthetic interaction data where an LLM agent collaborates with itself.

Generating this data at scale is enabled by a new high-performance model serving engine called Matrix. Using this approach on maths, scientific, and social reasoning tasks reportedly yielded improvements of up to 29.4% compared to the standard ‘chain-of-thought’ performance of a single LLM.
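Conceptually, self-collaboration means two instances of the same model conversing about a task until they converge on an answer, with the transcripts reused as training data. The sketch below outlines such a loop under that assumption; the `agent` callable and the stopping rule are hypothetical, not part of Meta's released framework.

```python
# Hypothetical sketch of a two-agent collaborative reasoning loop.
# `agent` stands in for any chat-style LLM call; it is an assumption here.
def collaborate(agent, task: str, max_turns: int = 6):
    transcript = [f"Task: {task}"]
    for turn in range(max_turns):
        speaker = "A" if turn % 2 == 0 else "B"
        reply = agent(role=speaker, history=transcript)
        transcript.append(f"{speaker}: {reply}")
        # Stop once an agent signals agreement on a final answer.
        if "FINAL ANSWER" in reply and turn > 0:
            break
    return transcript  # transcripts like this can serve as synthetic training data
```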

By open-sourcing the data generation and modelling pipeline, Meta aims to foster further research into creating truly “social agents that can partner with humans and other agents.”

These five releases collectively underscore Meta’s continued heavy investment in fundamental AI research, particularly focusing on building blocks for machines that can perceive, understand, and interact with the world in more human-like ways. 

See also: Meta will train AI models using EU user data

NVIDIA advances AI frontiers with CES 2025 announcements
Tue, 07 Jan 2025

NVIDIA CEO and founder Jensen Huang took the stage for a keynote at CES 2025 to outline the company’s vision for the future of AI in gaming, autonomous vehicles (AVs), robotics, and more.

“AI has been advancing at an incredible pace,” Huang said. “It started with perception AI — understanding images, words, and sounds. Then generative AI — creating text, images, and sound. Now, we’re entering the era of ‘physical AI,’ AI that can perceive, reason, plan, and act.”

With NVIDIA’s platforms and GPUs at the core, Huang explained how the company continues to fuel breakthroughs across multiple industries while unveiling innovations such as the Cosmos platform, next-gen GeForce RTX 50 Series GPUs, and compact AI supercomputer Project DIGITS. 

RTX 50 series: “The GPU is a beast”

One of the most significant announcements during CES 2025 was the introduction of the GeForce RTX 50 Series, powered by NVIDIA Blackwell architecture. Huang debuted the flagship RTX 5090 GPU, boasting 92 billion transistors and achieving an impressive 3,352 trillion AI operations per second (TOPS).

“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” said Huang.

Holding the blacked-out GPU, Huang called it “a beast,” highlighting its advanced features, including dual cooling fans and its ability to leverage AI for revolutionary real-time graphics.

Set for a staggered release in early 2025, the RTX 50 Series includes the flagship RTX 5090 and RTX 5080 (available 30 January), followed by the RTX 5070 Ti and RTX 5070 (February). Laptop GPUs join the lineup in March.

In addition, NVIDIA introduced DLSS 4 – featuring ‘Multi Frame Generation’ technology, which generates up to three additional frames for every traditionally rendered frame – and claims that, combined with DLSS upscaling, it can boost gaming performance by up to eight times.

Other advancements, such as RTX Neural Shaders and RTX Mega Geometry, promise heightened realism in video games, including precise face and hair rendering using generative AI.

Cosmos: Ushering in physical AI

NVIDIA took another step forward with the Cosmos platform at CES 2025, which Huang described as a “game-changer” for robotics, industrial AI, and AVs. Much like the impact of large language models on generative AI, Cosmos represents a new frontier for AI applications in robotics and autonomous systems.

“The ChatGPT moment for general robotics is just around the corner,” Huang declared.

Cosmos integrates generative models, tokenisers, and video processing frameworks to enable robots and vehicles to simulate potential outcomes and predict optimal actions. By ingesting text, image, and video prompts, Cosmos can generate “virtual world states,” tailored for complex robotics and AV use cases involving real-world environments and lighting.

Top robotics and automotive leaders – including XPENG, Hyundai Motor Group, and Uber – are among the first to adopt Cosmos, which is available on GitHub via an open licence.

Pras Velagapudi, CTO at Agility Robotics, commented: “Data scarcity and variability are key challenges to successful learning in robot environments. Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”

Empowering developers with AI models

NVIDIA also unveiled new AI foundation models for RTX PCs, which aim to supercharge content creation, productivity, and enterprise applications. These models, delivered as NVIDIA NIM (NVIDIA Inference Microservices), are designed to integrate with the RTX 50 Series hardware.

Huang emphasised the accessibility of these tools: “These AI models run in every single cloud because NVIDIA GPUs are now available in every cloud.”

NVIDIA is doubling down on its push to equip developers with advanced tools for building AI-driven solutions. The company introduced AI Blueprints: pre-configured tools for crafting agents tailored to specific enterprise needs, such as content generation, fraud detection, and video management.

“They are completely open source, so you could take it and modify the blueprints,” explained Huang.

Huang also announced the release of Llama Nemotron, designed for developers to build and deploy powerful AI agents.

Ahmad Al-Dahle, VP and Head of GenAI at Meta, said: “Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimisation across a system of LLMs to deliver efficient, accurate AI agents.

“Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”

Philipp Herzig, Chief AI Officer at SAP, added: “AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios.

“Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialised AI agents to transform business processes.”

Safer and smarter autonomous vehicles

NVIDIA’s announcements extended to the automotive industry, where its DRIVE Hyperion AV platform is fostering a safer and smarter future for AVs. Built on the new NVIDIA AGX Thor system-on-a-chip (SoC), the platform allows vehicles to achieve next-level functional safety and autonomous capabilities using generative AI models.

“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test-drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”

Huang explained that synthetic data is critical for AV development, as it dramatically enhances real-world datasets. NVIDIA’s AI data factories – powered by Omniverse and Cosmos platforms – generate synthetic driving scenarios, increasing the effectiveness of training data exponentially.

Toyota, the world’s largest automaker, is committed to using NVIDIA DRIVE AGX Orin and the safety-certified NVIDIA DriveOS to develop its next-generation vehicles. Heavyweights such as JLR, Mercedes-Benz, and Volvo Cars have also adopted DRIVE Hyperion.

Project DIGITS: Compact AI supercomputer

Huang concluded his NVIDIA keynote at CES 2025 with a final “one more thing” announcement: Project DIGITS, NVIDIA’s smallest yet most powerful AI supercomputer, powered by the cutting-edge GB10 Grace Blackwell Superchip.

“This is NVIDIA’s latest AI supercomputer,” Huang declared, revealing its compact size, claiming it’s portable enough to “practically fit in a pocket.”

Project DIGITS enables developers and engineers to train and deploy AI models directly from their desks, providing the full power of NVIDIA’s AI stack in a compact form.

(Image: Project DIGITS, the compact AI supercomputer NVIDIA debuted at CES 2025, pictured on a desk.)

Set to launch in May, Project DIGITS represents NVIDIA’s push to make AI supercomputing accessible to individuals as well as organisations.

Vision for tomorrow

Reflecting on NVIDIA’s journey since inventing the programmable GPU in 1999, Huang described the past 12 years of AI-driven change as transformative.

“Every single layer of the technology stack has been fundamentally transformed,” he said.

With advancements spanning gaming, AI-driven agents, robotics, and autonomous vehicles, Huang foresees an exciting future.

“All of the enabling technologies I’ve talked about today will lead to surprising breakthroughs in general robotics and AI over the coming years,” Huang concluded.

(Image Credit: NVIDIA)

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

MIT breakthrough could transform robot training
Mon, 28 Oct 2024

MIT researchers have developed a robot training method that reduces time and cost while improving adaptability to new tasks and environments.

The approach – called Heterogeneous Pretrained Transformers (HPT) – combines vast amounts of diverse data from multiple sources into a unified system, effectively creating a shared language that generative AI models can process. This method marks a significant departure from traditional robot training, where engineers typically collect specific data for individual robots and tasks in controlled environments.

Lead researcher Lirui Wang – an electrical engineering and computer science graduate student at MIT – believes that while many cite insufficient training data as a key challenge in robotics, a bigger issue lies in the vast array of different domains, modalities, and robot hardware. Their work demonstrates how to effectively combine and utilise all these diverse elements.

The research team developed an architecture that unifies various data types, including camera images, language instructions, and depth maps. HPT utilises a transformer model, similar to those powering advanced language models, to process visual and proprioceptive inputs.
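A rough sketch of the general idea – projecting each modality into a shared token space so a single transformer trunk can consume them – is shown below. The module names, feature dimensions, and action head are illustrative assumptions and do not correspond to the released HPT code.

```python
# Hypothetical sketch of a shared multimodal trunk: module names, feature
# sizes and the action head are illustrative assumptions, not the HPT code.
import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    def __init__(self, dim=256, heads=8, layers=4):
        super().__init__()
        self.vision_proj = nn.Linear(512, dim)    # per-patch image features
        self.proprio_proj = nn.Linear(32, dim)    # joint angles, gripper state
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=layers)
        self.action_head = nn.Linear(dim, 7)      # e.g. a 7-DoF action

    def forward(self, vision_tokens, proprio_tokens):
        # Both modalities become tokens in one sequence, so neither dominates.
        tokens = torch.cat([self.vision_proj(vision_tokens),
                            self.proprio_proj(proprio_tokens)], dim=1)
        ctx = self.trunk(tokens)                   # (batch, tokens, dim)
        return self.action_head(ctx.mean(dim=1))   # pooled features -> action

# Example shapes: vision_tokens (B, 16, 512), proprio_tokens (B, 1, 32)
```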

In practical tests, the system demonstrated remarkable results—outperforming traditional training methods by more than 20 per cent in both simulated and real-world scenarios. This improvement held true even when robots encountered tasks significantly different from their training data.

The researchers assembled an impressive dataset for pretraining, comprising 52 datasets with over 200,000 robot trajectories across four categories. This approach allows robots to learn from a wealth of experiences, including human demonstrations and simulations.

One of the system’s key innovations lies in its handling of proprioception (the robot’s awareness of its position and movement). The team designed the architecture to place equal importance on proprioception and vision, enabling more sophisticated dexterous motions.

Looking ahead, the team aims to enhance HPT’s capabilities to process unlabelled data, similar to advanced language models. Their ultimate vision involves creating a universal robot brain that could be downloaded and used for any robot without additional training.

While acknowledging they are in the early stages, the team remains optimistic that scaling could lead to breakthrough developments in robotic policies, similar to the advances seen in large language models.

You can find a copy of the researchers’ paper here (PDF).

(Photo by Possessed Photography)

See also: Jailbreaking AI robots: Researchers sound alarm over security flaws

AI-powered underwater vehicle transforms offshore wind inspections
Tue, 24 Sep 2024

Beam has deployed the world’s first AI-driven autonomous underwater vehicle for offshore wind farm inspections. The technology has already proved its mettle by inspecting jacket structures at Scotland’s largest offshore wind farm, Seagreen—a joint venture between SSE Renewables, TotalEnergies, and PTTEP.

The AI-powered vehicle represents a significant leap forward in marine technology and underwater robotics. Capable of conducting complex underwater inspections without human intervention, it promises to dramatically enhance efficiency and slash costs associated with underwater surveys and inspections.

Traditionally, offshore wind site inspections have been manual, labour-intensive processes. Beam’s autonomous solution offers a radical departure from this approach, enabling data to be streamed directly back to shore. This shift allows offshore workers to concentrate on more intricate tasks while reducing inspection timelines by up to 50%, resulting in substantial operational cost savings.

Brian Allen, CEO of Beam, said: “We are very proud to have succeeded in deploying the world’s first autonomous underwater vehicle driven by AI. Automation can revolutionise how we carry out inspection and maintenance of offshore wind farms, helping to reduce both costs and timelines.”

Beyond improved efficiency, Beam’s technology elevates the quality of inspection data and facilitates the creation of 3D reconstructions of assets alongside visual data. This deployment marks a crucial step in Beam’s roadmap for autonomous technology, with plans to extend this AI-driven solution across its fleet of DP2 vessels, ROVs, and autonomous underwater vehicles (AUVs) throughout 2025 and 2026.

“Looking ahead to the future, the potential of this technology is huge for the industry, and success in these initial projects is vital for us to progress and realise this vision. This wouldn’t be possible without forward-thinking customers like SSE Renewables who are willing to go on the journey with us,” explained Allen.

The Seagreen wind farm, operational since October 2023, is the world’s deepest fixed-bottom offshore wind farm. Beam’s project at Seagreen has provided crucial insights into the potential of autonomous technology for large offshore wind superstructures. The data collected by the AI-driven vehicle will support ongoing operational reliability at the site, offering valuable information on areas such as marine growth and potential erosion at the foundations.

Matthew Henderson, Technical Asset Manager – Substructure and Asset Lifecycle at SSE Renewables, commented: “At SSE, we have a mantra that ‘if it’s not safe, we don’t do it.’ Beam’s technology demonstrates that autonomous inspections can reduce the personnel we need to send offshore for planned inspections, while speeding up planned works and collecting rich data-sets to inform asset integrity planning.

“As we move further offshore, and into deeper waters, the ability to collect high-quality inspection data in a low-risk manner is imperative to us delivering our Net Zero Acceleration Programme.”

As Beam prepares to roll out its AI-driven inspection technology across its fleet in 2025 and 2026, this deployment aligns with the company’s mission to revolutionise offshore wind operations by making them more efficient and cost-effective—further supporting the global energy transition.

The success of this AI-powered underwater vehicle at Seagreen wind farm not only demonstrates the potential of autonomous technology in offshore wind inspections but also sets a new standard for safety, efficiency, and data quality in the industry. Such innovations will play a crucial role in ensuring the sustainability and cost-effectiveness of offshore wind energy.

See also: Hugging Face is launching an open robotics project

Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’
Mon, 25 Mar 2024

Stanhope AI – a company applying decades of neuroscience research to teach machines how to make human-like decisions in the real world – has raised £2.3m in seed funding led by the UCL Technology Fund.

Creator Fund also participated, along with, MMC Ventures, Moonfire Ventures and Rockmount Capital and leading angel investors. 

Stanhope AI was founded as a spinout from University College London, supported by UCL Business, by three of the most eminent names in neuroscience and AI research: CEO Professor Rosalyn Moran (former Deputy Director of King’s Institute for Artificial Intelligence); Director Karl Friston, Professor at the UCL Queen Square Institute of Neurology; and Technical Advisor Dr Biswa Sengupta (MD of AI and Cloud products at JP Morgan Chase).

By taking key neuroscience principles and applying them to AI and mathematics, Stanhope AI sits at the forefront of the new generation of AI technology known as ‘agentic’ AI. The team has built algorithms that, like the human brain, are always trying to guess what will happen next, learning from any discrepancies between predicted and actual events to continuously update their “internal models of the world.” Instead of training vast LLMs to make decisions based on seen data, Stanhope AI’s agentic models are in charge of their own learning. They autonomously decode their environments and rebuild and refine their “world models” using real-time data, continuously fed to them via onboard sensors.

The rise of agentic AI

This approach, and Stanhope AI’s technology, are based on the neuroscience principle of Active Inference – the idea that our brains, in order to minimise free energy, are constantly making predictions about incoming sensory data around us. As this data changes, our brains adapt and update our predictions in response to rebuild and refine our world view. 

This is very different to the traditional machine learning methods used to train today’s AI systems such as LLMs. Today’s models can only operate within the realms of the training they are given, and can only make best-guess decisions based on the information they have. They can’t learn on the go. They require extreme amounts of processing power and energy to train and run, as well as vast amounts of seen data.  

By contrast, Stanhope AI’s Active Inference models are truly autonomous. They can constantly rebuild and refine their predictions. Uncertainty is minimised by default, which removes the risk of hallucinations about what the AI thinks is true, and this moves Stanhope’s unique models towards reasoning and human-like decision-making. What’s more, by drastically reducing the size and energy required to run the models and the machines, Stanhope AI’s models can operate on small devices such as drones and similar.  
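Stripped to its bare bones, the predict-then-update loop described above can be illustrated with a toy agent whose “world model” is a single number nudged by prediction error. This is only a minimal illustration of the general principle, not Stanhope AI’s algorithm.

```python
# Toy illustration of a predict-then-update loop in the spirit of active
# inference: the internal "world model" here is just a scalar estimate.
def run_agent(observations, learning_rate=0.1):
    belief = 0.0                         # the agent's internal model of the world
    for obs in observations:
        prediction = belief              # guess what will happen next
        error = obs - prediction         # surprise: mismatch with what happened
        belief += learning_rate * error  # update the model to reduce surprise
    return belief

# e.g. run_agent([1.0, 1.2, 0.9, 1.1]) nudges the belief towards ~1.0
```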

“The most all-encompassing idea since natural selection”

Stanhope AI’s approach is possible because of its founding team’s extensive research into the neuroscience principles of Active Inference, as well as free energy. Indeed, director Professor Friston – a world-renowned neuroscientist at UCL whose work has been cited twice as many times as Albert Einstein’s – is the inventor of the Free Energy Principle.

Friston’s theory centres on how our brains minimise surprise and uncertainty. It explains that all living things are driven to minimise free energy, and thus the energy needed to predict and perceive the world. Such is its impact, the Free Energy Principle has been described as the “most all-encompassing idea since the theory of natural selection.” Active Inference sits within this theory to explain the process our brains use to minimise this energy. This idea infuses Stanhope AI’s work, led by Professor Moran, a specialist in Active Inference and its application through AI, and Dr Biswa Sengupta, whose doctoral research at the University of Cambridge was in dynamical systems, optimisation and energy efficiency.

Real-world application

In the immediate term, the technology is being tested with delivery drones and autonomous machines used by partners including Germany’s Federal Agency for Disruptive Innovation and the Royal Navy. In the long term, the technology holds huge promise in the realms of manufacturing, industrial robotics and embodied AI. The investment will be used to further the company’s development of its agentic AI models and the practical application of its research.  

Professor Rosalyn Moran, CEO and co-founder of Stanhope AI, said: “Our mission at Stanhope AI is to bridge the gap between neuroscience and artificial intelligence, creating a new generation of AI systems that can think, adapt, and decide like humans. We believe this technology will transform the capabilities of AI and robotics and make them more impactful in real-world scenarios. We trust the math and we’re delighted to have the backing of investors like UCL Technology Fund who deeply understand the science behind this technology and their support will be significant on our journey to revolutionise AI technology.”

David Grimm, partner at the UCL Technology Fund, said: “AI startups may be some of the hottest investments right now, but few have the calibre and deep scientific and technical know-how of the Stanhope AI team. This is emblematic of their unique approach, combining neuroscience insights with advanced AI, which presents a groundbreaking opportunity to advance the field and address some of the most challenging problems in AI today. We can’t wait to see what this team achieves.”

Marina Santilli, associate director at UCL Business, added: “The promise offered by Stanhope AI’s approach to Artificial Intelligence is hugely exciting, providing hope for powerful whilst energy-light models. UCLB is delighted to have been able to support the formation of a company built on the decades of fundamental research at UCL led by Professor Friston, developing the Free Energy Principle.”

Hugging Face is launching an open robotics project
Fri, 08 Mar 2024

Hugging Face, the startup behind the popular open source machine learning codebase and ChatGPT rival Hugging Chat, is venturing into new territory with the launch of an open robotics project.

The ambitious expansion was announced by former Tesla staff scientist Remi Cadene in a post on X.

In keeping with Hugging Face’s ethos of open source, Cadene stated the robot project would be “open-source, not as in Open AI” in reference to OpenAI’s legal battle with Cadene’s former boss, Elon Musk.

Cadene – who will be leading the robotics initiative – revealed that Hugging Face is hiring robotics engineers in Paris, France.

A job listing for an “Embodied Robotics Engineer” sheds light on the project’s goals, which include “designing, building, and maintaining open-source and low cost robotic systems that integrate AI technologies, specifically in deep learning and embodied AI.”

The role involves collaborating with ML engineers, researchers, and product teams to develop innovative robotics solutions that “push the boundaries of what’s possible in robotics and AI.” Key responsibilities range from building low-cost robots using off-the-shelf components and 3D-printed parts to integrating deep learning and embodied AI technologies into robotic systems.

Until now, Hugging Face has primarily focused on software offerings like its machine learning codebase and open-source chatbot. The robotics project marks a significant departure into the hardware realm as the startup aims to bring AI into the physical world through open and affordable robotic platforms.

(Photo by Possessed Photography on Unsplash)

See also: Google engineer stole AI tech for Chinese firms

AUKUS trial advances AI for military operations
Mon, 05 Feb 2024

The UK armed forces and Defence Science and Technology Laboratory (Dstl) recently collaborated with the militaries of Australia and the US as part of the AUKUS partnership in a landmark trial focused on AI and autonomous systems. 

The trial, called Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE), was held in Australia under the AUKUS partnership formed last year between the three countries. It aimed to test robotic vehicles and sensors in situations involving electronic attacks, GPS disruption, and other threats to evaluate the resilience of autonomous systems expected to play a major role in future military operations.

Understanding how to ensure these AI systems can operate reliably in the face of modern electronic warfare and cyber threats will be critical before the technology can be more widely adopted.  

The TORVICE trial featured US and British autonomous vehicles carrying out reconnaissance missions while Australian units simulated battlefield electronic attacks on their systems. Analysis of the performance data will help strengthen the protections and safeguards needed to prevent system failures or disruptions.

Guy Powell, Dstl’s technical authority for the trial, said: “The TORVICE trial aims to understand the capabilities of robotic and autonomous systems to operate in contested environments. We need to understand how robust these systems are when subject to attack.

“Robotic and autonomous systems are a transformational capability that we are introducing to armies across all three nations.” 

This builds on the first AUKUS autonomous systems trial, held in April 2023 in the UK. It also represents a step forward following the AUKUS defence ministers’ December announcement that Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) would be integrated into the three countries’ military forces beginning in 2024.

Dstl military advisor Lt Col Russ Atherton said that successfully harnessing AI and autonomy promises to “be an absolute game-changer” that reduces the risk to soldiers. The technology could carry out key tasks like sensor operation and logistics over wider areas.

“The ability to deploy different payloads such as sensors and logistics across a larger battlespace will give commanders greater options than currently exist,” explained Lt Col Atherton.

By collaborating, the AUKUS allies aim to accelerate development in this crucial new area of warfare, improving interoperability between their forces, maximising their expertise, and strengthening deterrence in the Indo-Pacific region.

As AUKUS continues to deepen cooperation on cutting-edge military technologies, this collaborative effort will significantly enhance military capabilities while reducing risks for warfighters.

(Image Credit: Dstl)

See also: Experts from 30 nations will contribute to global AI safety report

AWS and NVIDIA expand partnership to advance generative AI
Wed, 29 Nov 2023

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.

Internally, Amazon robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments first before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks

Open X-Embodiment dataset and RT-X model aim to revolutionise robotics
Wed, 04 Oct 2023

In a collaboration between 33 academic labs worldwide, a consortium of researchers has unveiled a revolutionary approach to robotics.

Traditionally, robots have excelled in specific tasks but struggled with versatility, requiring individual training for each unique job. However, this limitation might soon be a thing of the past.

Open X-Embodiment: The gateway to generalist robots

At the heart of this transformation lies the Open X-Embodiment dataset, a monumental effort pooling data from 22 distinct robot types.

With the contributions of over 20 research institutions, this dataset comprises over 500 skills, encompassing a staggering 150,000 tasks across more than a million episodes.

This treasure trove of diverse robotic demonstrations represents a significant leap towards training a universal robotic model capable of multifaceted tasks.
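Conceptually, training on such a pooled dataset amounts to interleaving episodes from many different robots into one training stream. The sketch below illustrates that idea with made-up dataset names and placeholder episodes; it is not the actual Open X-Embodiment loading code.

```python
# Conceptual sketch of cross-embodiment training: interleave episodes from
# several robot datasets into a single stream. Dataset names are made up.
import itertools
import random

def mixed_episode_stream(datasets, weights):
    """Yield (robot_name, episode) pairs sampled proportionally to weights."""
    names = list(datasets)
    iters = {name: itertools.cycle(datasets[name]) for name in names}
    while True:
        name = random.choices(names, weights=[weights[n] for n in names])[0]
        yield name, next(iters[name])

datasets = {
    "arm_a": [{"obs": ..., "action": ...}],   # placeholder episodes
    "arm_b": [{"obs": ..., "action": ...}],
}
stream = mixed_episode_stream(datasets, {"arm_a": 0.7, "arm_b": 0.3})
# for robot, episode in itertools.islice(stream, 100): train_step(robot, episode)
```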

RT-1-X: A general-purpose robotics model

Accompanying this dataset are the RT-X models, produced by training two existing architectures – RT-1, a real-world robotic control model, and RT-2, a vision-language-action model – on the cross-embodiment data. The resulting RT-1-X exhibits exceptional skills transferability across various robot embodiments.

In rigorous testing across five research labs, RT-1-X outperformed its counterparts by an average of 50 percent.

The success of RT-1-X signifies a paradigm shift, demonstrating that training a single model with diverse, cross-embodiment data dramatically enhances its performance on various robots.

Emergent skills: Leaping into the future

The experimentation did not stop there. Researchers explored emergent skills, delving into uncharted territories of robotic capabilities.

RT-2-X, an advanced version of the vision-language-action model, exhibited remarkable spatial understanding and problem-solving abilities. By incorporating data from different robots, RT-2-X demonstrated an expanded repertoire of tasks, showcasing the potential of shared learning in the robotic realm.

A responsible approach

Crucially, this research emphasises a responsible approach to the advancement of robotics. 

By openly sharing data and models, the global community can collectively elevate the field—transcending individual limitations and fostering an environment of shared knowledge and progress.

The future of robotics lies in mutual learning, where robots teach each other, and researchers learn from one another. The momentous achievement unveiled this week paves the way for a future where robots seamlessly adapt to diverse tasks, heralding a new era of innovation and efficiency.

(Photo by Brett Jordan on Unsplash)

See also: Amazon invests $4B in Anthropic to boost AI capabilities

UK commits £13M to cutting-edge AI healthcare research
Thu, 10 Aug 2023

The UK has announced a £13 million investment in cutting-edge AI research within the healthcare sector.

The announcement, made by Technology Secretary Michelle Donelan, marks a major step forward in harnessing the potential of AI in revolutionising healthcare. The investment will empower 22 winning projects across universities and NHS trusts, from Edinburgh to Surrey, to drive innovation and transform patient care.

Dr Antonio Espingardeiro, IEEE member and software and robotics expert, comments:

“As it becomes more sophisticated, AI can efficiently conduct tasks traditionally undertaken by humans. The potential for the technology within the medical field is huge—it can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data, that would otherwise take decades for humans to analyse.

We are just starting to see the beginning of a new era where machine learning could bring substantial value and transform the traditional role of the doctor. The true capabilities of this technology as an aid to the healthcare sector are yet to be fully realised. In the future, we may even be able to solve some of the biggest challenges and issues of our time.”

One of the standout projects receiving funding is University College London’s Centre for Interventional and Surgical Sciences. With a grant exceeding £500,000, researchers aim to develop a semi-autonomous surgical robotics platform designed to enhance the removal of brain tumours. This pioneering technology promises to elevate surgical outcomes, minimise complications, and expedite patient recovery times.

“With the increased adoption of AI and robotics, we will soon be able to deliver the scalability that the healthcare sector needs and establish more proactive care delivery,” added Espingardeiro.

The University of Sheffield’s project – backed by £463,000 – focuses on a crucial aspect of healthcare: chronic nerve pain. The team’s innovative approach aims to widen and improve treatments for this condition, which affects one in ten adults over 30.

The University of Oxford’s project, bolstered by £640,000, seeks to expedite research into a foundational AI model for clinical risk prediction. By analysing an individual’s existing health conditions, this AI model could accurately forecast the likelihood of future health problems and revolutionise early intervention strategies.

Meanwhile, Heriot-Watt University in Edinburgh has secured £644,000 to develop a groundbreaking system that offers real-time feedback to trainee surgeons practising laparoscopy procedures, also known as keyhole surgeries. This technology promises to enhance the proficiency of aspiring surgeons and elevate the overall quality of healthcare.

Finally, the University of Surrey’s project – backed by £456,000 – will collaborate closely with radiologists to develop AI capable of enhancing mammogram analysis. By streamlining and improving this critical diagnostic process, AI could contribute to earlier cancer detection.

Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre, said:

“The emergence of AI in healthcare has completely reshaped the way we diagnose, treat, and monitor patients.

Applications of AI in healthcare include finding new links between genetic codes, performing robot-assisted surgeries, improving medical imaging methods, automating administrative tasks, personalising treatment options, producing more accurate diagnoses and treatment plans, enhancing preventive care and quality of life, predicting and tracking the spread of infectious diseases, and helping combat epidemics and pandemics.”

With the UK healthcare sector already witnessing AI applications in improving stroke diagnosis, heart attack risk assessment, and more, the £13 million investment is poised to further accelerate transformative healthcare breakthroughs.

Health and Social Care Secretary Steve Barclay commented:

“AI can help the NHS improve outcomes for patients, with breakthroughs leading to earlier diagnosis, more effective treatments, and faster recovery. It’s already being used in the NHS in a number of areas, from improving diagnosis and treatment for stroke patients to identifying those most at risk of a heart attack.

This funding is yet another boost to help the UK lead the way in healthcare research. It comes on top of the £21 million we recently announced for trusts to roll out the latest AI diagnostic tools and £123 million invested in 86 promising technologies through our AI in Health and Care Awards.”

However, the announcement came in the same week that NHS waiting lists hit a record high. Prime Minister Rishi Sunak made reducing waiting lists one of his five key priorities for 2023, asking to be held “to account directly for whether it is delivered.” Hope is being pinned on technologies like AI to help tackle waiting lists.

The funding also comes as the UK prepares to host the world’s first major international summit on AI safety, underscoring its commitment to responsible AI development.

Scheduled for later this year, the AI safety summit will provide a platform for international stakeholders to collaboratively address AI’s risks and opportunities.

As Europe’s AI leader and the third-ranked globally behind the USA and China, the UK is well-positioned to lead these discussions and champion the responsible advancement of AI technology.

(Photo by National Cancer Institute on Unsplash)

See also: BSI publishes guidance to boost trust in AI for healthcare

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK commits £13M to cutting-edge AI healthcare research appeared first on AI News.

SK Telecom outlines its plans with AI partners https://www.artificialintelligence-news.com/news/sk-telecom-outlines-its-plans-with-ai-partners/ Tue, 20 Jun 2023 16:37:32 +0000
SK Telecom (SKT) is taking significant steps to solidify its position in the global AI ecosystem. 

The company recently held a meeting at its Silicon Valley headquarters with CEOs from four new AI partners – CMES, MakinaRocks, Scatter Lab, and FriendliAI – to discuss business cooperation and forge a path towards leadership in the AI industry.

SKT has been actively promoting AI transformation through strategic partnerships and collaborations with various AI companies. During MWC 2023, the company announced partnerships with seven AI companies: SAPEON, Bespin Global, Moloco, Konan Technology, Swit, Phantom AI, and Tuat.

During the meeting, SKT’s CEO Ryu Young-sang outlined the company’s AI vision and discussed its business plans with the AI partners. The executives from SKT and its AI partners engaged in in-depth discussions on major global AI trends, the latest technological achievements, ongoing R&D projects, and global business and investment opportunities.

One of the notable discussions took place between SKT and CMES, an AI-powered robotics company.

SKT and CMES exchanged views and ideas on pricing plans and subscription-based business models for AI-driven “Robot as a Service” (RaaS) offerings tailored for enterprises.

RaaS is gaining attention as a cost-effective alternative to additional manpower or infrastructure investment for automation. The demand for RaaS is expected to grow rapidly in sectors such as logistics, delivery, construction, and healthcare.

Furthermore, SKT aims to collaborate with Scatter Lab, a renowned AI startup known for its Lee Lu-da chatbot. SKT plans to integrate an emotional AI agent into its AI service, ‘A.’

Additionally, SKT discussed strategies for synergy creation with MakinaRocks, a startup specialising in industrial AI solutions, and FriendliAI, a startup that provides a platform for developing generative AI models. By joining forces, the companies aim to establish a leading position in the global AI market.

Ryu Young-sang, CEO of SKT, commented:

“Now with our AI partners on board, we have completed the blueprint for driving new growth in the global market.

We will work together to develop diverse cooperation opportunities in AI, and bring our AI technologies and services to the global market.”

By harnessing the expertise and technologies of its AI partners, SKT is well-positioned to lead the global AI ecosystem and deliver innovative AI solutions to the market.

(Photo by Brett Jordan on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

The post SK Telecom outlines its plans with AI partners appeared first on AI News.

Tesla’s AI supercomputer tripped the power grid https://www.artificialintelligence-news.com/news/tesla-ai-supercomputer-tripped-power-grid/ Mon, 03 Oct 2022 09:40:05 +0000
Tesla’s purpose-built AI supercomputer ‘Dojo’ is so powerful that it tripped the power grid.

Dojo was unveiled at Tesla’s annual AI Day last year but the project was still in its infancy. At AI Day 2022, Tesla unveiled the progress it has made with Dojo over the course of the year.

The supercomputer has transitioned from just a chip and training tiles into a full cabinet. Tesla claims that it can replace six GPU boxes with a single Dojo tile, which it says is cheaper than one GPU box.

Per tray, there are six Dojo tiles. Tesla claims that each tray is equivalent to “three to four fully-loaded supercomputer racks”. Two trays can fit in a single Dojo cabinet along with a host assembly.
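Taking the figures above at face value – one tile replacing six GPU boxes, six tiles per tray, two trays per cabinet – the back-of-the-envelope arithmetic works out as follows (these equivalences are Tesla’s own claims, not independent benchmarks):

```python
# Back-of-the-envelope arithmetic using Tesla's claimed equivalences above;
# these are marketing figures, not independent measurements.
GPU_BOXES_PER_TILE = 6   # "replace six GPU boxes with a single Dojo tile"
TILES_PER_TRAY = 6       # "Per tray, there are six Dojo tiles"
TRAYS_PER_CABINET = 2    # "Two trays can fit in a single Dojo cabinet"

tiles_per_cabinet = TILES_PER_TRAY * TRAYS_PER_CABINET       # 12 tiles
gpu_box_equivalent = tiles_per_cabinet * GPU_BOXES_PER_TILE  # 72 GPU boxes

print(f"One Dojo cabinet: {tiles_per_cabinet} tiles, "
      f"roughly {gpu_box_equivalent} GPU boxes by Tesla's claim.")
```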

Such a supercomputer naturally has a large power draw. Dojo requires so much power that it managed to trip the grid in Palo Alto.

“Earlier this year, we started load testing our power and cooling infrastructure. We were able to push it over 2 MW before we tripped our substation and got a call from the city,” said Bill Chang, Tesla’s Principal System Engineer for Dojo.

To keep Dojo running, Tesla had to build custom infrastructure with its own high-powered cooling and power-delivery systems.

An ‘ExaPOD’ (consisting of a few Dojo cabinets) has the following specs:

  • 1.1 EFLOPS
  • 1.3 TB SRAM
  • 13 TB DRAM

Seven ExaPODs are currently planned to be housed in Palo Alto.
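Scaling Tesla’s stated per-pod figures to the seven planned ExaPODs – assuming each is built to the same configuration – gives a rough sense of the total planned capacity:

```python
# Aggregate capacity of the seven planned ExaPODs, scaling Tesla's stated
# per-pod specs; assumes every pod matches the quoted configuration.
EXAPODS_PLANNED = 7
EFLOPS_PER_POD = 1.1    # compute per ExaPOD
SRAM_TB_PER_POD = 1.3   # SRAM per ExaPOD
DRAM_TB_PER_POD = 13.0  # DRAM per ExaPOD

print(f"Total compute: {EXAPODS_PLANNED * EFLOPS_PER_POD:.1f} EFLOPS")  # 7.7
print(f"Total SRAM:    {EXAPODS_PLANNED * SRAM_TB_PER_POD:.1f} TB")     # 9.1
print(f"Total DRAM:    {EXAPODS_PLANNED * DRAM_TB_PER_POD:.0f} TB")     # 91
```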

Dojo is purpose-built for AI and will greatly improve Tesla’s ability to train neural nets using video data from its vehicles. These neural nets will be critical for Tesla’s self-driving efforts and its humanoid robot ‘Optimus’, which also made an appearance during this year’s event.

Optimus

Optimus was also first unveiled last year and was at an even earlier stage than Dojo. In fact, all it amounted to at the time was a person in a spandex suit and some PowerPoint slides.

While it’s clear that Optimus still has a long way to go before it can do the shopping and carry out dangerous manual labour tasks, as Tesla envisions, we at least saw a working prototype of the robot at AI Day 2022.

“I do want to set some expectations with respect to our Optimus robot,” said Tesla CEO Elon Musk. “As you know, last year it was just a person in a robot suit. But, we’ve come a long way, and compared to that it’s going to be very impressive.”

Optimus can now walk around and, when attached to apparatus from the ceiling, perform some basic tasks such as watering plants.

The prototype of Optimus was reportedly developed in the past six months and Tesla is hoping to get a working design within the “next few months… or years”. The price tag is “probably less than $20,000”.

All the details of Optimus are still vague at the moment, but at least there’s more certainty around the Dojo supercomputer.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Tesla’s AI supercomputer tripped the power grid appeared first on AI News.
