About AIME
Artificial Intelligence Machine
Powering AI with Expertise, Performance and Reliability
Our core competence is the development, assembly and provision of highly specialized HPC servers and workstations for building and deploying deep learning models and artificial intelligence solutions.
We assist our customers in finding and implementing their preferred high-performance compute solution, whether on-site, in colocation, rented, or as a dedicated API solution.
AIME was founded in 2019 by Toine Diepstraten (graduate computer scientist, specialized in artificial intelligence and distributed systems) and Henri Hagenow (graduate physicist, specialized in the digitalization of business processes and user experience). While developing their own AI solutions, they quickly recognized the high demand for cost-efficient, scalable and reliable compute power.
Today, AIME GmbH is a mid-tier system integrator specializing in HPC and deep learning systems, based in Berlin, Germany.
Our Customers & Partners
Our customer base includes renowned mid-sized companies, large corporations, system houses, public institutions, universities, and more.
To provide our customers with fast access to the best technologies and solutions, we established close partnerships with leading IT manufacturers early on. In collaboration with our partners, we develop tailored solutions that precisely meet our customers' individual needs and goals.
Our direct partners include global market leaders in component manufacturing, such as ASUS, Gigabyte, AMD, Samsung, NVIDIA, and more. As a result, we can provide our customers with rapid access to the latest product lines.
Moreover, we are actively involved in the open-source community and contribute by releasing our own software stack and benchmarking scripts, providing our customers with a competitive edge in developing and deploying their products, while ensuring our solutions remain at the forefront of technological advancements.
Get working solutions fast, not just a bunch of components
As system integrators we build, sell, host and rent HPC servers based mainly on AMD CPUs, NVIDIA GPUs/accelerators and ASUS barebones. This choice rests on comparisons and benchmarks against components from similar suppliers, as well as years of experience weighing durability, reliability, user-friendliness and service. As direct ASUS partners we can guarantee the shortest delivery times and best prices.
Optimized HPC Servers for AI Excellence
We put a lot of effort into optimally balancing the components of our products and also offer added value on the software side. AIME servers are preconfigured and ready to use for deep learning and model deployment out of the box. In contrast to generic system providers, we specialize in HPC servers optimized for AI development and inference.
Software Stack for AI Expertise
With our software stack, we strengthen our position as a specialist for HPC hardware in the field of artificial intelligence. It demonstrates that our specialization in AI is application-oriented and grounded in passion and experience.
Continuous Innovation
By continuously optimizing cost, efficiency and performance, we keep up the fast pace needed to lay the foundation for better and more reliable AI solutions. We continue to expand our software stack with new features, open-source models and extended functionality.
GDPR-Compliant AI Operations
AIME is a strong advocate of AI safety and the privacy of personal data. For the GDPR-compliant operation of AI models we offer several options: operate purchased or leased servers on-site in your own company domain, or host or rent bare-metal servers that remain under the customer's sovereignty. All our cloud offerings are operated in Germany and comply with German data sovereignty laws, which are stricter than the GDPR.
We also provide scalable API endpoints to run AI services (like LLMs) in a GDPR-compliant way.
Professionally Engineered & Tested
Because we take quality and reliability very seriously, we assemble exclusively at our Berlin, Germany site. This allows us to ensure that our solutions meet the highest standards and reliably serve our customers. Each AIME system undergoes a comprehensive testing series, including endurance and burn-in/stress tests under full load, to ensure it meets our high-quality standards before delivery. We also check and configure firmware and BIOS for optimal operation.
Preconfigured for Optimal Operation
We install a compute-optimized Ubuntu Linux operating system with the latest drivers and frameworks such as TensorFlow and PyTorch, as well as the AIME MLC framework, so that the server and its GPUs are ready to use out of the box, without tedious driver or framework installation. Just log in and start right away with your favourite deep learning framework.
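As a minimal sketch (assuming the preinstalled PyTorch build with CUDA support described above), a freshly delivered system can be sanity-checked right after login:

# Quick sanity check on a freshly delivered AIME system.
# Assumes the preinstalled PyTorch with CUDA support described above.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}:", torch.cuda.get_device_name(i))

If all installed GPUs are listed, training or inference jobs can be started immediately.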
The Founding of AIME: A Story of Innovation
During the development of an early deep learning-based speech recognition model in 2018, the AIME founders realized that there were no satisfactory GPU servers on the market that could operate 24/7 at full capacity without throttling due to overheating caused by poor cooling concepts. To address this, they designed the liquid-cooled workstation T400, which met their high standards. When showcased at a startup exhibition, the workstation's performance drew significant attention, prompting the founders to establish AIME and bring the workstation to market. To scale compute power further, rack servers were added to the product range. The selection of components was always based on comparisons under high loads and benchmark results, with the goal of achieving optimal cooling and power supply for maximum performance at the best price-performance ratio.
Parallel to this, AIME developed the AIME MLC framework, which significantly simplifies the development and training of AI models.
During the COVID-19 pandemic, logistical issues severely impacted hardware shipping. To mitigate this, AIME offered customers the option to host their purchased hardware at the AIME colocation. This led to the creation of the AIME Cloud and the Buy & Host offering. As the variety of AI models grew, the focus shifted from training to inference. In response, AIME decided to develop an API server for model deployment.
AIME Labs
In our AIME Labs we develop software and train and benchmark AI models to facilitate and support the work of our customers: we offer the powerful AIME API Server for inference, the AIME MLC container framework for AI training, and our benchmark suite as free, open-source solutions that assist in the development and deployment of ML/DL models.
Find more information in the software section, in our blog articles and on our GitHub page.
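For illustration, the following minimal Python sketch shows the kind of GPU throughput measurement such a benchmark suite typically performs. It is not the actual AIME benchmark suite; the matrix size and iteration count are arbitrary example values.

# Illustrative only, not the actual AIME benchmark suite:
# times batched matrix multiplications in PyTorch and reports TFLOPS per GPU.
import time
import torch

def matmul_tflops(device: str, size: int = 4096, iterations: int = 50) -> float:
    """Measure sustained matmul throughput in TFLOPS on one device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    # Warm-up so kernel setup and clock ramp-up are not measured.
    for _ in range(5):
        a @ b
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    for _ in range(iterations):
        a @ b
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start
    flops = 2 * size ** 3 * iterations  # ~2*n^3 floating-point ops per matmul
    return flops / elapsed / 1e12

if __name__ == "__main__":
    for i in range(torch.cuda.device_count()):
        print(f"cuda:{i}: {matmul_tflops(f'cuda:{i}'):.1f} TFLOPS")

Running such a measurement under sustained load is also a simple way to verify that a system holds its performance without thermal throttling.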
Building Trust through Transparency and Reliability
Our customers' trust is founded on our transparency, reliability, and shared goal of developing the optimal solution for their specific application. We put our customers at the center of our operations, providing individualized consulting to craft solutions that deliver measurable benefits and meet their unique requirements.
Customized HPC Solutions for Your AI Application
AIME offers tailored solutions that cater to your AI use case. Our team of specialists accompanies you from the initial idea to implementation, working together to find the hardware solution that meets your requirements best.
Expertise and Experience You Can Trust
With years of experience and expertise, we provide expert guidance and solutions for complex Artificial Intelligence or Big Data applications and specific use cases. Our management team and in-house developers specialize in HPC hardware and software development of AI models, enabling us to provide application-oriented consulting.
Contact us
Call us or send us an email if you have any questions. We would be glad to assist you in finding the most suitable compute solution.
AIME GmbH
Marienburger Str. 1
10405 Berlin
Germany