09:00, 14-05-2024

KAYTUS Releases MotusAI, an AI Development Platform for Highly Efficient GPU Resource Scheduling and Task Orchestration

MotusAI Achieves an Average Cluster Computing Power Utilization Rate of Over 70% by Implementing Efficient and Unified GPU Scheduling

SINGAPORE & HAMBURG, Germany–(BUSINESS WIRE)– KAYTUS, a leading IT infrastructure provider, has unveiled MotusAI, an AI development platform now available for trial worldwide. MotusAI is tailored for deep learning and AI development, integrating GPU and data resources with AI development environments to streamline computing resource allocation, task orchestration, and centralized management. It accelerates training data handling and seamlessly manages AI model development workflows. The platform sharply reduces resource investment, boosts development efficiency, raises cluster computing power utilization to over 70%, and significantly enhances large-scale training task scheduling performance.

Streamline AI Development for Cost-Effectiveness and Efficiency

The rapid expansion of enterprise AI business and AI model development brings forth challenges including low computing efficiency, complexity in model development, varied requirements for task orchestration across different scenarios, and unstable computing resources. Ensuring efficient, flexible, and stable operation of AI business is critical for enterprises to consistently derive business insights, generate revenue, and maintain competitiveness.

Optimize Resource Management for Maximum Computing Power

MotusAI allocates resources and workloads efficiently through intelligent and flexible GPU scheduling, meeting the computing power demands of diverse AI workloads by assigning GPU resources dynamically. With multi-dimensional and dynamic GPU resource allocation, including fine-grained GPU scheduling and support for Multi-Instance GPU (MIG), MotusAI meets computing power requirements across scenarios such as model development, debugging, and training.
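
As an illustration of what fine-grained GPU allocation can look like in practice, the sketch below requests a single MIG slice for a debugging workload through the standard Kubernetes Python client. The resource name follows NVIDIA's device-plugin naming convention; the pod name, namespace, and image are placeholders and do not reflect MotusAI's own interface.

```python
# Hypothetical sketch: requesting a MIG slice for a debugging job on a
# Kubernetes-based GPU scheduler. The resource name follows the NVIDIA device
# plugin's MIG "mixed" strategy; names and image are illustrative only.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="debug-notebook", namespace="ai-dev"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="notebook",
                image="pytorch/pytorch:latest",
                command=["sleep", "infinity"],
                resources=client.V1ResourceRequirements(
                    # A 1g.10gb MIG slice instead of a whole GPU: fine-grained
                    # allocation keeps the rest of the card free for other tasks.
                    limits={"nvidia.com/mig-1g.10gb": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-dev", body=pod)
```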

Streamline Task Orchestration for Versatile Support of Various Scenarios

MotusAI reengineers cloud-native scheduling. Its scheduler surpasses the community version, dramatically improving the scheduling performance of large-scale POD tasks: MotusAI achieves rapid startup and environment readiness for hundreds of PODs, delivering five times the throughput at one-fifth the latency of the community scheduler. This ensures efficient scheduling and utilization of computing resources for large-scale training. Moreover, MotusAI dynamically scales AI workloads for both training and inference services, supporting burst tasks and fulfilling diverse scheduling needs across scenarios.
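
To make the dynamic-scaling idea concrete, the following sketch scales a hypothetical inference Deployment up for a burst of requests and back down afterwards using the standard Kubernetes API. The Deployment name, namespace, and replica counts are assumptions, not MotusAI interfaces.

```python
# Illustrative sketch of dynamic scaling for an inference service: grow the
# replica count for a traffic burst, then shrink back. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def scale_inference(deployment: str, namespace: str, replicas: int) -> None:
    """Set the replica count of an inference Deployment."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Burst: add capacity for a traffic spike, then release it afterwards.
scale_inference("llm-inference", "ai-serving", replicas=8)
# ... burst handled ...
scale_inference("llm-inference", "ai-serving", replicas=2)
```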

MotusAI empowers users to maximize computing resources, from fine-grained partitioning of a single card into multiple instances to large-scale parallel computing across multiple machines and cards. By combining computing power pooling, dynamic scaling, and single-card GPU sharing, MotusAI significantly raises computing power utilization, achieving an average utilization rate of over 70%. It further improves computing efficiency through cluster topology awareness and optimized network communication.
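
As an illustration of how an average utilization figure like this can be measured, the sketch below samples per-GPU utilization on one node with nvidia-smi and reports the node average. It is not MotusAI code, and aggregation across the whole cluster is left out.

```python
# Illustrative only: sample per-GPU utilization on the local node via
# nvidia-smi and report the node average. Cluster-wide aggregation (e.g. via a
# monitoring stack) is omitted.
import subprocess

def node_gpu_utilization() -> list[float]:
    """Return per-GPU utilization (%) on the local node."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(line) for line in out.splitlines() if line.strip()]

samples = node_gpu_utilization()
if samples:
    print(f"node average GPU utilization: {sum(samples) / len(samples):.1f}%")
```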

Data Transfer Acceleration for up to Three Times Higher Efficiency

MotusAI accelerates data transfer through features such as local loading and computation of remote data, eliminating delays caused by network I/O during computation. Using strategies such as "zero-copy" data transfer, multi-threaded retrieval, incremental data updates, and affinity scheduling, MotusAI significantly shortens data caching cycles. These enhancements markedly improve AI development and training efficiency, delivering a two- to three-fold gain in model training efficiency.
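
A minimal sketch of the caching idea described above: pull remote training shards into a local cache with several threads so the GPU is not stalled on network I/O, and skip shards that are already cached. The URLs, paths, and thread count are assumptions; MotusAI's own transfer engine is not public.

```python
# Illustrative sketch: multi-threaded prefetch of remote dataset shards into a
# local cache, with a simple incremental-update check. Not MotusAI code.
import concurrent.futures
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("/local-cache/dataset")
CACHE_DIR.mkdir(parents=True, exist_ok=True)

def fetch(url: str) -> pathlib.Path:
    """Download one shard into the local cache, skipping files already present."""
    target = CACHE_DIR / url.rsplit("/", 1)[-1]
    if not target.exists():              # incremental update: only new shards
        urllib.request.urlretrieve(url, target)
    return target

shard_urls = [f"https://data.example.com/shards/part-{i:05d}.tar" for i in range(64)]

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    local_paths = list(pool.map(fetch, shard_urls))

print(f"{len(local_paths)} shards available locally for training")
```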

Reliable and Automatically Fault-Tolerant Platform

MotusAI supports performance monitoring and alerts for computing resources, providing real-time status updates for core platform services. It employs sandbox isolation mechanisms for data with higher security levels. In case of resource failures or abnormalities, MotusAI automatically initiates fault tolerance processes to ensure the quickest possible recovery of interrupted training tasks. This approach reduces fault handling time by over 90% on average.
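
For context, automatic recovery of interrupted training generally relies on a checkpoint-and-resume pattern like the sketch below: when a job is restarted after a failure, it continues from the last saved state instead of from scratch. The paths, model, and schedule are placeholders, not MotusAI internals.

```python
# Illustrative checkpoint/resume loop in PyTorch. A restarted job reloads the
# latest checkpoint and continues from the following epoch.
import os
import torch
import torch.nn as nn

CKPT = "/local-cache/checkpoint.pt"

model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
start_epoch = 0

if os.path.exists(CKPT):                      # restarted after a failure
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_epoch = state["epoch"] + 1

for epoch in range(start_epoch, 100):
    loss = model(torch.randn(32, 128)).sum()  # stand-in for a real training step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "epoch": epoch}, CKPT)
```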

Comprehensive Management of AI Model Development in One Integrated Solution

MotusAI accelerates AI development and supports every stage of large model development. From managing data samples and software stacks to designing model architectures, debugging code, training models, tuning parameters, and conducting evaluation testing, MotusAI offers a complete platform. It integrates popular development frameworks like PyTorch and TensorFlow, along with distributed training frameworks such as Megatron and DeepSpeed.
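
As a minimal sketch of the kind of distributed-training entry point such a platform schedules, the example below wraps a PyTorch model with DeepSpeed's ZeRO optimizer. The model and configuration values are placeholders; in practice the script would be launched with the deepspeed launcher on the nodes the scheduler assigns, and none of this reflects MotusAI-specific APIs.

```python
# Minimal DeepSpeed training setup (placeholder model and config).
import torch.nn as nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},   # shard optimizer state across GPUs
    "bf16": {"enabled": True},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# engine.backward(loss) and engine.step() would drive the training loop.
```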

Moreover, MotusAI enables comprehensive lifecycle management of AI inferencing services, including offline and online testing, A/B testing, rolling release, service orchestration, and service decommissioning. These features collectively enhance the business value of AI technology, fostering continuous business growth.

Additionally, MotusAI provides an integrated visual management and operation interface that covers computing, networking, storage, and application resources. Operational staff can comprehensively manage, monitor, and evaluate the overall platform operation status through a single interface.

Free Trial Available

MotusAI is now available worldwide for a trial period, offering free remote access for one month, along with testing, training, and support. Users can also opt for local deployment using their own devices and environment, with local deployment testing support from KAYTUS. For more information and to register, please visit Link1 and Link2.

About KAYTUS

KAYTUS is a premier provider of IT infrastructure products and solutions, delivering a suite of cutting-edge, open, and environmentally friendly infrastructure solutions for cloud, AI, edge computing, and other emerging technologies. With a customer-centric approach, KAYTUS adapts flexibly to user needs through its agile business model. Learn more at KAYTUS.com.

Contacts

media@kaytus.com

Check out our Twitter: @NewsNovumpr