INTRODUCTION

We propose the creation of a Data Processing Center with a total capacity of at least 2,000 PFLOPS. Built on the IRONBYTE architecture for the distributed launch and management of AI computing workloads, the data center prioritizes high availability of the core system, which is responsible for orchestrating all tasks and data storage.

Data Center Scaling Task

The data center scales horizontally by adding nodes of the same type. Once the cluster exceeds 5,000 nodes, the number of master nodes must be increased to keep task orchestration effective. Adding next-generation compute nodes requires no changes to the existing architecture or software frameworks.
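The stepwise growth of the master-node pool described above can be sketched as a simple sizing rule. The 5,000-node threshold comes from this proposal; the base quorum of 3 masters and the 2-masters-per-additional-group increment are illustrative assumptions, not IRONBYTE constants.

```python
import math

BASE_MASTERS = 3                # assumed HA quorum for the core system
NODES_PER_MASTER_GROUP = 5000   # threshold stated in the proposal


def required_masters(compute_nodes: int) -> int:
    """Return a master-node count that grows stepwise with cluster size."""
    if compute_nodes <= NODES_PER_MASTER_GROUP:
        return BASE_MASTERS
    # Each additional full or partial group of 5,000 nodes adds masters.
    extra_groups = math.ceil(compute_nodes / NODES_PER_MASTER_GROUP) - 1
    return BASE_MASTERS + 2 * extra_groups  # assumed: 2 extra masters per group


print(required_masters(4000))   # 3
print(required_masters(12000))  # 7
```

The exact increment would be tuned to the orchestrator's measured load per master; the point is that master capacity grows in discrete steps while compute capacity grows node by node.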

Tasks That Can Be Solved

Learning

Training and fine-tuning of LLMs (Large Language Models) and other ML (Machine Learning) workloads

Storage

Storage of models, datasets, and AI software libraries

Scaling

Serving models and building inference pipelines that scale with demand
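An inference pipeline of this kind can be sketched as a chain of stages, each consuming the previous stage's output. The Pipeline class and the tokenize/model/postprocess stages below are illustrative stand-ins; the proposal does not specify IRONBYTE's actual pipeline API.

```python
from typing import Any, Callable


class Pipeline:
    """Minimal stage-chaining sketch: each stage feeds the next."""

    def __init__(self, stages: list[Callable[[Any], Any]]):
        self.stages = stages

    def run(self, payload: Any) -> Any:
        for stage in self.stages:
            payload = stage(payload)
        return payload


# Illustrative stages: tokenize -> model forward pass -> postprocess.
pipeline = Pipeline([
    lambda text: text.lower().split(),                         # stand-in tokenizer
    lambda tokens: {"tokens": tokens, "score": len(tokens)},   # stand-in model
    lambda out: f"{len(out['tokens'])} tokens scored",         # postprocess
])

print(pipeline.run("Hello IRONBYTE"))  # 2 tokens scored
```

Scaling then amounts to replicating individual stages across nodes and load-balancing requests between replicas, which the orchestrator can do per stage rather than per whole pipeline.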

Irregular Problems

Tasks can be combined on a single IRONBYTE RIG when resources are available. During planned modernization, tasks can be pinned to nodes equipped with new-architecture accelerators. Legacy-code support is also possible for inference models that must remain operational over extended periods.