FAQs
Who programs the FPGAs?
During the order and configuration process, I/ONX programs the FPGAs. After installation, if the client needs additional programming, I/ONX can typically respond within 24 hours for a standard program; custom programming is quoted with a timeline. Most clients will have the in-house talent to program the FPGAs without I/ONX. I/ONX also has a roadmap for a marketplace where clients can access existing FPGA programming.
Does the platform come with support?
Yes, I/ONX provides white-glove support through the ordering, configuration, delivery, and implementation processes. Post-implementation, I/ONX support is available through a ticketing system and Slack. Premium support can be purchased at any point of ownership.
What is the delivery time?
From order, it is typically 90 days or less.
Is there a hardware warranty?
Yes, all hardware comes with a transferable contract from the originating OEM, and the I/ONX proprietary hardware comes with a 36-month warranty.
Why would I want different OEMs for GPUs?
Some AI models run more efficiently on one OEM’s GPU, while other models run more efficiently on another OEM’s GPU. When a model is run on the wrong chip, it draws more power, creates more heat, and delivers the final result more slowly.
Why would I use an ASIC and FPGA for AI over a GPU?
All three chip types have advantages and disadvantages for specific kinds of AI models and workloads. Using the right chip for the job increases output performance, lowers power and cooling usage, and significantly reduces OPEX.
Does this work for Bitcoin mining?
Our platform does not provide any advantage or efficiency in coin mining.
What models have been tested on the platform?
Whisper V3 Large, Stable Diffusion v1-4
How do we compete with Groq (the cloud AI inference company)?
We are competitive on inference and outperform on learning (training).
Does it work with OAM (Open Compute Project Accelerator Module) and PCIe (Peripheral Component Interconnect Express) connections?
Yes, it works with both.
Does the software enhance CPU stacks for cloud or regular computing?
No.
Does the platform work with VLM (Vision-Language Models) workloads?
Yes.
What are the dimensions and weight of the rack?
Dimensions and weight vary based on configuration. I/ONX systems are housed in a standard 42U rack, following industry-standard dimensions of approximately 24” W × 42” D × 78” H (600 mm × 1070 mm × 2000 mm). Custom configurations and hardware options do not significantly change the overall weight compared to traditional enterprise racks. The one exception is the I/ONX “Synth”, a portable mobile unit for extreme edge computing that is the size of a carry-on suitcase.
Are there any special mounting or floor load requirements?
No. The system does not require any special mounting or floor load accommodations. It is fully compatible with standard data center racks and flooring specifications. Any specific facility or customer mounting requirements can be easily met during installation.
Will these racks have liquid cooling, immersion, or other advanced solutions?
Yes. Cooling configurations are fully customizable based on site requirements. Our standard 20 kW and 40 kW systems are typically air-cooled, but we can design and deploy liquid-cooled, immersion, or hybrid solutions as needed. I/ONX systems are not limited by any specific cooling technology and can be adapted to meet facility and environmental specifications.
What is the preferred rack layout and containment strategy (e.g., hot/cold aisle, rear-door heat exchangers)?
The optimal rack layout and containment strategy will depend on the specific configuration and facility environment. I/ONX systems are designed to be flexible and adaptable, supporting standard hot/cold aisle containment, rear-door heat exchangers, or custom airflow solutions as required. We will work with your facilities team to ensure the layout aligns with your cooling and operational objectives.
What are your total power requirements (kW per rack and overall)?
Power requirements are configurable based on system design and facility capacity. Racks can be tailored to operate anywhere between 10 kW and 160 kW, with each rack capable of supporting up to 64 accelerators. All power distribution components, including PDUs, plug types, whip phase, voltage, and amperage, can be customized and assembled to meet your facility’s specific electrical and infrastructure requirements.
What is the power draw per rack (peak and average)?
Power draw varies based on the specific configuration. For example, a typical rack may include two servers with 16× AMD MI355X GPUs and 48 Tenstorrent accelerators configured for inference workloads. Depending on the mix of components and utilization, the average power draw typically aligns with facility specifications, while peak draw can approach the upper limit of the configured power envelope. I/ONX can customize each rack’s power profile to meet your facility requirements or limitations, ensuring compatibility with existing power and cooling infrastructure.
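As a rough, hedged illustration of how such an envelope adds up, the sketch below estimates accelerator power for the example configuration above. The per-device wattages (about 1,400 W per MI355X and about 250 W per Tenstorrent accelerator, borrowed from the Schumann figures later in this document) and the overhead factor are illustrative assumptions, not measured I/ONX specifications.

    # Hedged, back-of-envelope estimate of per-rack accelerator power.
    # Per-device wattages and overhead are assumptions, not I/ONX specs.
    gpu_count, gpu_watts = 16, 1400     # 16x AMD MI355X (assumed ~1,400 W each)
    tt_count, tt_watts = 48, 250        # 48x Tenstorrent accelerators (assumed ~250 W each)

    accelerator_watts = gpu_count * gpu_watts + tt_count * tt_watts   # 34,400 W
    overhead = 0.15                     # assumed host CPUs, fans, PSU losses, networking
    estimated_peak_watts = accelerator_watts * (1 + overhead)         # roughly 39.6 kW
    print(round(estimated_peak_watts))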
What are the voltage and amperage requirements?
Typical configurations operate at 208V, with amperage requirements dependent on the specific processor and accelerator configuration. I/ONX systems can be engineered to support a wide range of power delivery options, and voltage and current specifications can be tailored to match your facility’s available power infrastructure and distribution standards.
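For rough planning purposes, the relationship between rack power, voltage, and current can be sketched as follows. The 40 kW load, three-phase 208 V feed, and 0.95 power factor are illustrative assumptions, not a fixed I/ONX specification.

    import math

    # Approximate line current on a three-phase feed: I = P / (sqrt(3) * V * PF)
    load_watts = 40_000          # assumed rack load
    line_voltage = 208           # typical configuration noted above
    power_factor = 0.95          # assumed
    amps = load_watts / (math.sqrt(3) * line_voltage * power_factor)
    print(round(amps, 1))        # ~117 A total, before derating or redundancy margins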
Do you require dual power feeds (A/B)?
Dual power feeds (A/B) are not required, but they are strongly preferred to ensure power redundancy and high availability. The system can operate on a single feed if necessary, but utilizing dual feeds is recommended for fault tolerance and continuous operation in production environments.
What is the anticipated PUE (per rack and overall)?
The Power Usage Effectiveness (PUE) will depend on both the rack configuration and the facility’s cooling and power infrastructure. I/ONX systems are designed for high efficiency and can integrate seamlessly into data centers with optimized PUE values, typically ranging between 1.1 and 1.4 in well-managed environments. Actual performance will align with the facility’s design and operational parameters.
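As a reference, PUE is simply total facility power divided by IT equipment power. The figures below are an illustrative example chosen to land inside the cited 1.1 to 1.4 range, not measurements from an I/ONX deployment.

    # PUE = total facility power / IT equipment power
    it_load_kw = 40.0            # assumed IT (rack) load
    cooling_and_losses_kw = 8.0  # assumed cooling, distribution, and lighting overhead
    pue = (it_load_kw + cooling_and_losses_kw) / it_load_kw
    print(round(pue, 2))         # 1.2, within the 1.1-1.4 range cited above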
What is the heat output per rack (BTU/hr)?
This will depend on the configuration. I/ONX can provide precise thermal specifications once the final configuration and facility parameters are determined. I/ONX can also be configured to meet a BTU/hr limitation.
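For estimation, nearly all electrical load drawn by IT equipment is rejected as heat, and 1 kW corresponds to roughly 3,412 BTU/hr. The 40 kW rack load below is an illustrative assumption.

    # Convert an assumed rack load to heat output.
    BTU_HR_PER_KW = 3412         # standard conversion factor
    rack_load_kw = 40            # assumed configured rack load
    heat_output_btu_hr = rack_load_kw * BTU_HR_PER_KW
    print(heat_output_btu_hr)    # 136,480 BTU/hr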
Requirements for high-speed interconnects (e.g., 400G)?
High-speed interconnects are not required for standard system operation. Top-of-rack (TOR) networking configurations are customer-specific, and I/ONX can accommodate any preferred network architecture or vendor. The system is fully compatible with all major OEM networking solutions, including 100G, 200G, and 400G.
Are there requirements for specific carriers, cloud on-ramps, or dark fiber?
I/ONX systems do not have any specific carrier or network requirements. The technology treats all connections as standard network interfaces originating from the data center’s meet-me room. Whether deployed in a multi-tenant or single-tenant facility, any network type, including lit services, blended bandwidth, dark fiber, CDN connections, cross-connects, or cloud on-ramps, can be integrated to the top-of-rack or end-of-aisle switch. This flexibility ensures seamless integration with existing carrier-neutral or dedicated network environments.
What level of redundancy is needed (N, N+1, 2N)?
The level of redundancy will be determined by the customer’s facility design and availability requirements. I/ONX will design the system to align with those specifications and can provide guidance and recommendations once the target redundancy model (e.g., N, N+1, or 2N) is defined. Our architecture supports all standard redundancy configurations, ensuring alignment with enterprise, research, and HPC reliability objectives.
What are the bandwidth and throughput requirements?
Bandwidth and throughput requirements will depend on the specific use case and workload profile. For example, AI training clusters typically require higher aggregate bandwidth to support data movement between accelerators and storage, while inference clusters prioritize low-latency connectivity to optimize real-time responsiveness. I/ONX can assist in designing and optimizing network configurations to match performance objectives, whether for high-throughput training environments or latency-sensitive inference applications.
What physical security measures do the racks support?
Physical security options can be configured to meet customer specifications, and the racks have no inherent limitations. I/ONX systems can support a variety of security measures, including locking front and rear doors, badge or key access controls, intrusion detection sensors, and rack-level monitoring. Security configurations are fully customizable to align with data center policies, compliance standards, and customer-specific requirements.
Does the rack require any specific cabling or layout preferences?
Yes. We can provide reference architectures.
What are the sustainability metrics of a full rack?
This will depend on the configuration.
Does the rack and hardware have any compliance certifications? (SOC 2, ISO 27001, HIPAA, PCI)?
We do not have specific certifications for the entire rack; the individual LRUs (line-replaceable units) are certified.
Does the rack require dedicated cages or air-gapped zones?
No.
What type of DCIM/APIs integration does the rack support?
RESTful APIs, but specific needs (e.g., Redfish) can also be supported.
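As a hedged sketch of what a RESTful DCIM integration could look like, the snippet below polls a hypothetical rack-telemetry endpoint. The URL, path, token, and field names are illustrative placeholders, not a published I/ONX API.

    import requests

    # Hypothetical endpoint and token for illustration only.
    BASE_URL = "https://ionx-rack.example.local/api/v1"
    headers = {"Authorization": "Bearer <token>"}

    resp = requests.get(f"{BASE_URL}/racks/1/telemetry", headers=headers, timeout=10)
    resp.raise_for_status()
    telemetry = resp.json()
    # Field names are assumptions; adapt to the actual schema.
    print(telemetry.get("power_kw"), telemetry.get("inlet_temp_c"))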
What type of environmental monitoring does the rack support (temperature, humidity, etc.)?
I/ONX systems typically utilize Schneider Electric intelligent PDUs, which support a comprehensive range of environmental sensors and monitoring accessories. These include temperature, humidity, airflow, and door contact sensors, as well as integration with DCIM platforms for real-time visibility and alerting. Additional monitoring solutions can be incorporated based on customer preference or existing facility systems, ensuring seamless integration into any data center management environment.
What SLAs should we expect for uptime, latency, and support?
Service Level Agreements (SLAs) for uptime and latency are typically defined at the facility or service provider level, based on the hosting environment and network infrastructure. All I/ONX hardware includes a three (3) year warranty and comprehensive service contract, covering hardware replacement, firmware updates, and engineering support. Extended warranty and support options are also available to align with customer operational requirements and uptime objectives.
What is the capacity and performance of a rack, and what should we expect?
This will depend on the configuration.
What is the redundancy within the rack, e.g., can the rack support certain failures?
Redundancy within the rack is configurable and depends on the system design and operational requirements. I/ONX supports a range of high-availability configurations that ensure continued operation even in the event of partial hardware or component failures. Key redundancy options include:
- Dual head nodes for active/standby or clustered failover control.
- Redundant power supplies (1+1 or N+1) for uninterrupted operation in case of PSU failure.
- Network redundancy, including dual top-of-rack (TOR) switches and bonded NICs for resilient connectivity.
- Redundant storage controllers and mirrored NVMe configurations to maintain data availability during component replacement or failure.
- PCIe fabric path redundancy, allowing failover between accelerator and compute nodes.
Is the infrastructure compatible with OpenStack?
Yes. The I/ONX infrastructure is fully compatible with OpenStack and supports all major services required for HPC and AI workloads, including:
- Nova: compute service supporting both virtual machines and bare-metal provisioning, optimized for GPU and accelerator passthrough.
- Neutron: advanced networking with SR-IOV, RDMA, and multi-fabric support for low-latency data paths across heterogeneous accelerators.
- Cinder: block storage service supporting high-performance NVMe and PCIe-based storage backends.
- Swift: object storage for large-scale AI training datasets and research archives.
- Keystone: centralized identity and access management with role-based access control (RBAC) for secure multi-user environments.
- Glance: image management for fast deployment of VM, container, or bare-metal workloads.
- Horizon: web-based dashboard integrated with I/ONX orchestration tools for cluster and workload management.
The I/ONX platform also supports custom OpenStack drivers and APIs for managing heterogeneous compute environments, enabling seamless orchestration across CPUs, GPUs, FPGAs, and AI accelerators from multiple OEMs. This architecture provides the flexibility to run mixed workloads, such as AI model training, inference, and data processing, under a unified OpenStack-managed environment.
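As an illustrative (not prescriptive) sketch, a standard OpenStack SDK workflow such as the one below should run unchanged against a Nova/Neutron/Glance deployment on this infrastructure. The cloud name, flavor, image, and network names are placeholders, not I/ONX-defined values.

    import openstack

    # Connects using a cloud entry from clouds.yaml; "ionx" is a placeholder name.
    conn = openstack.connect(cloud="ionx")

    flavor = conn.compute.find_flavor("gpu.passthrough.large")  # assumed flavor name
    image = conn.image.find_image("ubuntu-22.04")               # assumed image name
    network = conn.network.find_network("tenant-net")           # assumed network name

    # Boot a server on the selected flavor/image/network and wait for ACTIVE.
    server = conn.compute.create_server(
        name="training-node-01",
        flavor_id=flavor.id,
        image_id=image.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)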
Are there any limitations or customizations required to run OpenStack on this platform?
No.
Is there support for multi-tenancy and resource isolation within the virtualization layer?
Yes. The I/ONX platform fully supports multi-tenancy and resource isolation within the virtualization and orchestration layers. Common deployment patterns include tools such as Terraform for infrastructure-as-code provisioning and LXD for lightweight container-based virtualization. The system can isolate compute, storage, and network resources across tenants using hardware-level partitioning, virtual network segmentation (VLANs/VXLANs), and dedicated accelerator assignment (e.g., GPU passthrough). This ensures secure workload separation, performance consistency, and policy-based access control across shared infrastructure. I/ONX can also integrate with OpenStack, Kubernetes, or other orchestration frameworks to extend multi-tenant capabilities to hybrid and distributed environments.
Can we integrate OpenStack with existing orchestration or monitoring tools?
Yes. OpenStack can be seamlessly integrated with existing orchestration and monitoring tools. The I/ONX platform is designed with open APIs and modular architecture, allowing interoperability with a wide range of solutions for automation, observability, and infrastructure management. We do not anticipate any limitations to integration; however, the specific approach may depend on the tools and frameworks currently in use. Common integrations include:
- Orchestration: Terraform, Ansible, Kubernetes, Slurm, or HashiCorp Nomad.
- Monitoring and telemetry: Prometheus, Grafana, Zabbix, Nagios, and OpenStack’s native Ceilometer and Monasca services.
- Logging and event management: ELK/EFK stacks, Splunk, or custom SIEM solutions.
This flexibility ensures that OpenStack within the I/ONX environment can extend and enhance existing operational workflows without requiring significant architectural changes.
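For example, once platform metrics are exported to Prometheus, existing dashboards and scripts can query them over Prometheus's standard HTTP API. The Prometheus address below is a placeholder, and "up" is just a built-in example metric.

    import requests

    PROM_URL = "http://prometheus.example.local:9090"   # placeholder address
    # /api/v1/query is Prometheus's standard instant-query endpoint.
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up"}, timeout=10)
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        print(result["metric"].get("instance"), result["value"][1])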
Are there native APIs or SDKs available for deploying and managing inference workloads?
Yes. This is part of an upcoming release (October 2025).
Which frameworks are supported (e.g., TensorFlow, PyTorch, ONNX)?
This is chip-dependent, but we have not found any limitations on framework support.
Is there support for hardware acceleration (e.g., GPU, FPGA, ASIC) via APIs?
Yes. The core architecture of the I/ONX platform is designed to support any accelerator from any OEM, all within the same rack. These accelerators are accessible and manageable through standardized APIs, providing a unified and flexible interface for orchestration and workload management. The I/ONX system includes a vendor-agnostic hardware abstraction layer that exposes compute, memory, and interconnect resources through APIs and frameworks such as (examples, but not limited to):
- CUDA (NVIDIA) and ROCm (AMD) for GPU programming and management.
- OpenCL and SYCL for cross-platform accelerator compute.
- Vitis (Xilinx) and oneAPI (Intel) for FPGA and ASIC acceleration workflows.
- gRPC and RESTful APIs for integration with external orchestration, monitoring, and scheduling systems.
This standards-based approach allows seamless interoperability between heterogeneous compute resources, enabling organizations to mix and match GPUs, FPGAs, ASICs, and custom accelerators in a single environment. As a result, applications can efficiently leverage the most appropriate hardware for each workload without being constrained by vendor-specific architectures.
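As one concrete, vendor-neutral example of the kind of API listed above, OpenCL (via the pyopencl bindings) can enumerate accelerators from multiple OEMs in the same system. This is a generic OpenCL sketch, not an I/ONX-specific interface.

    import pyopencl as cl

    # Enumerate every OpenCL platform (NVIDIA, AMD, Intel, Xilinx, etc.)
    # and the devices it exposes, regardless of vendor.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(platform.name, "|", device.name, "|",
                  cl.device_type.to_string(device.type))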
Can inference workloads be triggered via RESTful APIs or gRPC?
Yes. Note that these capabilities are part of an upcoming release.
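Because these APIs are part of an upcoming release, the exact schema is not yet published; the snippet below is only a hedged illustration of the REST pattern such a call typically follows, with a placeholder endpoint, payload, and token.

    import requests

    # Placeholder endpoint and request body; not the released I/ONX API.
    resp = requests.post(
        "https://ionx.example.local/api/v1/inference",
        json={"model": "whisper-v3-large", "input": "s3://bucket/audio.wav"},
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())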
Is serverless inference supported natively or via a third-party platform (e.g., Knative, Kubeless)?
Not natively, but can be implemented if required. Knative would sit “above” our stack.
What triggers are supported for serverless inference (e.g., HTTP, Kafka, S3 events)?
As above: since serverless orchestration is not a native capability, trigger support would be determined by the chosen orchestration layer (e.g., Knative supports HTTP, Kafka, and other event sources).
How is resource allocation handled for serverless workloads—auto-scaling, cold start latency, etc.?
As above: auto-scaling semantics such as scale-to-zero or cold-start optimization would be provided by the orchestration layer, not our native platform. Our stack exposes the necessary hooks and elasticity to support them if such integration is desired.
Is there support for model versioning and rollback in serverless deployments?
Same position: model versioning and rollback would be controlled via the deployment/orchestration layer driving the serverless behavior. Our stack supports addressable artifacts, so version orchestration is feasible when layered with a compatible control plane.
What monitoring and logging capabilities are available for serverless inference jobs?
Again, telemetry is available at the platform/runtime level, but serverless-style job-level metrics, tracing, and per-invocation insights would be enabled through the chosen serverless controller (e.g., Knative Eventing + Observability stack).
Symphony's Heterogeneous Advantage
Can you share any head-to-head results where orchestrating CPUs, GPUs, FPGAs, and ASICs together beat a comparable all-GPU setup in speed or efficiency?
Not in speed. Efficiency, yes, but that depends on the definition. For example, with agentic and RAG/CAG products, accelerator utilization is very low; using expensive accelerators for a common product like RAG/CAG is extremely inefficient when measured by cost per token and total cost of ownership. More specifically, I/ONX has a product we call the Symphony Schumann that uses 8x AMD MI355X GPUs alongside 56x Tenstorrent Blackhole accelerators. The Blackhole accelerators run at 250 W compared to the 1,400 W MI355X GPUs, yet run a broad range of LLMs and SLMs with performance close to that of an NVIDIA H100/H200 GPU. This enables significant power savings: 64 MI355X GPUs would draw roughly 100,000 W, while the same 64 accelerator slots on the Schumann require roughly 30,000 W.
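A quick back-of-envelope check of those figures, using the per-device wattages quoted above (1,400 W per MI355X, 250 W per Blackhole) plus an assumed allowance for host CPUs, fans, and PSU losses:

    # Per-device wattages are the figures quoted above; the overhead factor is an assumption.
    mi355x_watts, blackhole_watts = 1400, 250

    all_gpu_rack = 64 * mi355x_watts                         # 89,600 W of accelerators
    schumann_rack = 8 * mi355x_watts + 56 * blackhole_watts  # 25,200 W of accelerators

    overhead = 1.15  # assumed host, fan, and PSU-loss allowance
    print(round(all_gpu_rack * overhead))    # ~103,000 W, consistent with "~100,000 W"
    print(round(schumann_rack * overhead))   # ~29,000 W, consistent with "~30,000 W"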
Product Stack
How does each product (Synth, Symphony, Canon) fit a specific use case, and how do they work together?
The I/ONX product ecosystem of Synth, Symphony, and Canon is designed to deliver a complete edge-to-core AI and HPC infrastructure. Each component serves a distinct purpose while operating within a unified architecture that supports heterogeneous compute and seamless data movement.

Synth (Edge Compute): Synth is optimized for extreme edge environments where real-time AI processing and sensor analytics are required in remote, mobile, or disconnected conditions. It supports workloads such as oil and gas exploration, marine biology research, and aerospace/autonomous decision-making. Synth enables on-site inference, data filtering, and pre-processing, with the ability to store data locally or transmit it via the customer’s WAN.

Symphony (Data Center Compute): Symphony serves as the core high-performance computing platform, built to handle AI model training, large-scale data analytics, and post-edge research workloads. It can host up to 64 heterogeneous accelerators per rack, allowing organizations to consolidate diverse GPU, FPGA, and ASIC hardware in a single environment for government, neo cloud, enterprise, and research compute deployments.

Canon (Ultra Low-Latency Storage): Canon provides the ultra-low-latency storage layer for Symphony deployments. Designed for single-tenant or small multi-tenant workloads, Canon scales from 1 PB to 100 PB while delivering extreme read/write speeds that significantly shorten the time between data ingestion and usable results. This makes Canon ideal for AI training, simulation workloads, and real-time research analysis where latency directly impacts productivity.

Integration Workflow: Together, Synth, Symphony, and Canon create a seamless edge-to-data-center continuum: Synth collects and processes data at the edge; Symphony performs large-scale compute and analysis once data is transmitted or synchronized; and Canon accelerates storage access and data retrieval, feeding Symphony’s compute resources at unmatched speeds. This integrated architecture allows organizations to deploy, train, and optimize AI models across distributed environments, ensuring faster insights, greater operational flexibility, and full utilization of heterogeneous compute resources.
Are clients using Synth at the edge to prototype and Symphony in the data center for scale, and where does Canon come into play in this ecosystem?
Synth is currently being evaluated by multiple agencies, research institutions, and corporations for extreme edge computing applications, including oil exploration, marine life research, and in-flight decision-making on aerospace and autonomous platforms. These deployments run AI workloads and sensor processing in remote or disconnected environments, with data either stored locally or transmitted via Starlink or proprietary mesh networks for downstream analysis. After field operations, the collected data is ingested into the I/ONX Symphony platform within a data center environment for post-edge analysis, large-scale research, and AI model refinement. This architecture provides a seamless bridge between edge inference and centralized high-performance compute, enabling faster, more accurate results across distributed environments. Synth is in active evaluation with several prospects, and I/ONX anticipates a public announcement in 2025 regarding its first production deployment. Within this ecosystem, Canon functions as the extreme low-latency storage layer for the Symphony platform. Canon is a scalable storage solution that can start at 1 PB and expand up to 100 PB, delivering ultra-low-latency performance that accelerates data access and processing and significantly reduces the time from workload initiation to result compared to traditional storage systems. Canon is optimized for low-bandwidth, single-tenant, or small multi-tenant environments, making it ideal for high-performance AI and research workloads.
Software & Developer Adoption
What is I/ONX doing on the software side - such as your Conductor orchestration layer or other tools - to make it plug-and-play for AI teams?
I/ONX’s Conductor orchestration layer is designed to make the platform completely plug-and-play for AI and data science teams, requiring no code changes or workflow modifications. Conductor provides a transparent orchestration layer that allows AI teams to run workloads Pythonically, using their preferred frameworks, tools, and development environments exactly as they do today. The goal is for AI practitioners to remain agnostic to the underlying infrastructure, while Conductor automatically handles resource scheduling, accelerator allocation, and workload distribution across heterogeneous hardware. This ensures optimal performance and scalability without additional complexity. For advanced users, Conductor also provides a comprehensive API that enables fine-grained control, custom optimization, and integration with third-party orchestration or MLOps tools. This dual approach—plug-and-play simplicity for most teams and deep configurability for power users—ensures the I/ONX platform supports both rapid prototyping and enterprise-scale AI operations. In parallel, the I/ONX team is developing a Graphical User Interface (GUI) that will further enhance usability and accessibility. The GUI will introduce expanded features, integrations, and simplified operability, enabling enterprises without large technical teams to fully leverage the benefits of heterogeneous compute on the I/ONX platform. The GUI project is currently in progress and is targeted for completion in late 2025.
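As a hedged illustration of the "no code changes" claim, the sketch below is an ordinary PyTorch training loop with nothing I/ONX-specific in it; per the answer above, Conductor's scheduling and accelerator allocation would happen beneath code like this rather than inside it. Conductor's own API is deliberately not shown, since its concrete names are not documented in this FAQ; the loop itself is a generic stand-in for a team's existing workload.

```python
# A plain PyTorch script with no I/ONX-specific calls. Per the answer above,
# Conductor is expected to schedule this onto available accelerators without
# edits to the code itself; Conductor's API is not shown (undocumented here).
import torch
import torch.nn as nn

# Use whatever accelerator the framework can see; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Synthetic batch stands in for the team's real data pipeline.
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```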
Can data scientists run TensorFlow/PyTorch models on your platform out-of-the-box, or do they need to refactor code in Rust to leverage your system's full benefits?
Yes. When running single-accelerator-class workloads, standard frameworks “see” the accelerators on the system over our PCIe fabric. When orchestrating across multiple classes of accelerators in the context of Tensor/Data/Hybrid parallelism training, our Rust-based ecosystem is a powerful enabler that simplifies working with multiple compilers and other complexities. In the context of Agentic or RAG/CAG flows, we have a Terraform -> OpenStack -> LXD ecosystem that enables routing to processors both within the rack (across our PCIe fabric) and across a local network (Ethernet). This is an optional capability our customers can use if they do not have a preferred orchestration platform.
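As a minimal sketch of the "frameworks see the accelerators" point for single-class workloads, the snippet below simply enumerates the devices PyTorch can reach on a node; on an I/ONX rack, accelerators exposed over the PCIe fabric would be expected to show up here like locally attached devices. It assumes a CUDA- or ROCm-enabled PyTorch build and does not cover the multi-class Rust orchestration path.

```python
# Enumerate the accelerators visible to a standard framework on this node.
# Assumes a CUDA- or ROCm-enabled PyTorch build; accelerators exposed over
# the PCIe fabric should appear like any locally attached device.
import torch

if torch.cuda.is_available():
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        print(f"device {idx}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
else:
    print("No CUDA/ROCm-visible accelerators on this node.")
```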
Competition and Defensibility
What prevents competition from achieving something similar to I/ONX's solution?
I/ONX has established significant competitive barriers through a combination of protected intellectual property, proprietary hardware design, and advanced orchestration software that has been in development for several years. While many have attempted to achieve true heterogeneous compute integration, these efforts have generally fallen short due to architectural complexity and hardware compatibility limitations. I/ONX is the only platform to successfully unify multiple accelerator types and OEM technologies within a single rack, and this advantage is secured by patented IP and purpose-built system engineering. Our multi-year head start in both hardware and software innovation provides a durable technological lead that is difficult to replicate without similar R&D depth and IP foundation.
Have you secured any patents around your architecture or software that give you a durable advantage?
Yes. I/ONX has secured intellectual property protections around key aspects of its system architecture, orchestration framework, and heterogeneous compute integration methods. These protections provide a durable competitive advantage and help safeguard the company’s innovations in multi-accelerator orchestration and vendor-agnostic system design. Additional patents and filings are in progress as I/ONX continues to expand its technology portfolio and strengthen its IP position.
How do you plan to stay ahead if giants or well-funded startups target the same heterogeneous orchestration space?
I/ONX maintains a multi-year head start in the heterogeneous orchestration space, supported by protected intellectual property, deep technical expertise, and a flexible, vendor-agnostic architecture that few competitors can match. Our strategy is rooted in continuous innovation, rapid iteration, and close collaboration with OEMs, integrators, and research partners, allowing us to evolve faster than larger, less agile organizations. This approach ensures that I/ONX remains ahead of the market curve long enough to achieve broad adoption and market saturation, while continuing to expand the technology’s capabilities and ecosystem support. Further, I/ONX has been selected as the standards authority for a nation-state where multi-national reciprocity is being established. This gives I/ONX a unique opportunity to translate R&D from our labs directly into standards that I/ONX can meet before competitors have visibility into, or access to, those standards and certification facilities.
Commercial Traction & Scalability
Is your approach product-ready and translating into revenues today?
Yes. The I/ONX platform is fully production-ready and is currently being actively marketed across multiple sectors. Several opportunities are in the final stages of the sales cycle, including underwriting, purchase agreement negotiations, and fund deployment scheduling on signed LOIs. While I/ONX is in the late pre-revenue phase, the company is on track to generate revenue within 2025.
Can you tell us about initial deployments - are they paid pilots or full production installs, and in which industries or use-cases?
As of October 27, 2025, I/ONX is pre-revenue but has completed third-party validation of its platform through Slalom Consulting, confirming performance and scalability across heterogeneous compute environments. To date, I/ONX has executed several proof-of-concept (POC) engagements with prospective customers. While early engagements were unpaid POCs for validation purposes, all future POCs will be structured as paid engagements to reflect the platform’s maturity and enterprise readiness. In 2025, I/ONX is in the final stages of closing two paid customer deployments, one with a nationally recognized research organization and another with a Division I university. Additionally, I/ONX is in the final stages of two proposed lab deployment deals, each involving data sovereignty requirements that necessitate region-specific infrastructure in two different nation-states.
How are you gearing up for scale if demand rises (for example, ensuring supply chain capacity with your U.S.-based manufacturing plan)?
I/ONX has established a diverse and resilient supply chain strategy to ensure scalability and continuity as demand increases. We have formal partnerships with multiple OEMs, system assemblers, VARs, integrators, and chip manufacturers, enabling us to pivot quickly in response to market or supply constraints. To maintain readiness, we communicate forecasted demand and production requirements across our partner network to proactively reserve manufacturing capacity and component availability. A key advantage of the I/ONX platform is its vendor-agnostic architecture, which ensures the system is not tied to any single manufacturer or region. The platform supports any accelerator or chip from any global supplier, eliminating dependency on a single source (for example, a single manufacturer in Taiwan). This design enables consistent scalability, supply flexibility, and operational resilience across both U.S. and international deployments.
I/ONX HPC - © 2025 - All Rights Reserved