The LEADERG AI Cluster server can use NVIDIA GPUs, Intel VPUs, Intel CPUs, and other compute acceleration devices at the same time. With network-distributed computing and a scalable architecture, it can process tens of thousands of inference requests per second.
The server runs OpenR8 AI software, which drastically lowers the technical threshold for deep learning training and inference. The software is optimized to significantly improve server inference performance.
[ How it works ]
1. The client submits an inference request to the AI Portal Server over the network via HTTP POST.
2. The AI Portal Server determines which AI inference device is available and forwards the inference request to that device via HTTP POST.
3. After the AI inference device receives the inference request, if the required deep learning model differs from the one it has loaded, it requests the model from the AI Portal Server via HTTP POST.
4. After the AI inference device finishes inference, it returns the results to the AI Portal Server in JSON format.
5. The AI Portal Server returns the results to the client in JSON format.
[ Suitable industry ]
Wafer fabrication, semiconductor packaging and testing, panel, copper foil, PCB, device manufacturing, security and surveillance, and more.
[ Ordering method ]
If you need an AI Cluster server, please visit the following website to order: