AI Benchmark

4.4 ★ (1.54K reviews) · 100K+ downloads · Content rating: Everyone

About this app

Neural Image Generation, Face Recognition, Image Classification, Question Answering...

Is your smartphone capable of running the latest Deep Neural Networks to perform these and many other AI-based tasks? Does it have a dedicated AI Chip? Is it fast enough? Run AI Benchmark to professionally evaluate its AI Performance!

Current phone ranking: http://ai-benchmark.com/ranking

AI Benchmark measures the speed, accuracy, power consumption and memory requirements of several key AI, Computer Vision and NLP models. Among the tested solutions are Image Classification and Face Recognition methods, AI models performing neural image and text generation, neural networks used for Image / Video Super-Resolution and Photo Enhancement, as well as AI solutions used in autonomous driving systems and smartphones for real-time Depth Estimation and Semantic Image Segmentation. Visualization of the algorithms' outputs lets you assess their results graphically and get acquainted with the current state of the art in various AI fields.
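
To make the idea of the speed measurements concrete, here is a minimal sketch of timing a single model with the TensorFlow Lite Python interpreter. It is only an illustrative assumption, not the app's actual measurement code; the model file name and iteration count are placeholders.

```python
import time
import numpy as np
import tensorflow as tf

# Placeholder model file; any TensorFlow Lite model would do.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v3.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Random input tensor matching the model's expected shape and type.
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])

# Warm-up run so one-time initialization does not skew the timings.
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

latencies_ms = []
for _ in range(50):
    interpreter.set_tensor(inp["index"], dummy)
    start = time.perf_counter()
    interpreter.invoke()
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"median inference latency: {np.median(latencies_ms):.1f} ms")
```

On a device, a measurement of this kind would be repeated for each available accelerator (CPU, GPU, NPU delegate) and combined with accuracy and memory checks.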

In total, AI Benchmark consists of 83 tests and 30 sections listed below:

Section 1. Classification, MobileNet-V3
Section 2. Classification, Inception-V3
Section 3. Face Recognition, Swin Transformer
Section 4. Classification, EfficientNet-B4
Section 5. Classification, MobileViT-V2
Sections 6/7. Parallel Model Execution, 8 x Inception-V3
Section 8. Object Tracking, YOLO-V8
Section 9. Optical Character Recognition, ViT Transformer
Section 10. Semantic Segmentation, DeepLabV3+
Section 11. Parallel Segmentation, 2 x DeepLabV3+
Section 12. Semantic Segmentation, Segment Anything
Section 13. Photo Deblurring, IMDN
Section 14. Image Super-Resolution, ESRGAN
Section 15. Image Super-Resolution, SRGAN
Section 16. Image Denoising, U-Net
Section 17. Depth Estimation, MV3-Depth
Section 18. Depth Estimation, MiDaS 3.1
Sections 19/20. Image Enhancement, DPED
Section 21. Learned Camera ISP, MicroISP
Section 22. Bokeh Effect Rendering, PyNET-V2 Mobile
Section 23. FullHD Video Super-Resolution, XLSR
Sections 24/25. 4K Video Super-Resolution, VideoSR
Section 26. Question Answering, MobileBERT
Section 27. Neural Text Generation, Llama2
Section 28. Neural Text Generation, GPT2
Section 29. Neural Image Generation, Stable Diffusion V1.5
Section 30. Memory Limits, ResNet

In addition, one can load and test custom TensorFlow Lite deep learning models in PRO Mode.
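
For context, a model for PRO Mode would typically be exported with the standard TensorFlow Lite converter. The sketch below is a hypothetical example (the Keras model and output file name are assumptions), not an official AI Benchmark workflow:

```python
import tensorflow as tf

# Hypothetical example: a small Keras model exported to TensorFlow Lite.
model = tf.keras.applications.MobileNetV3Small(weights=None)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional dynamic-range quantization
tflite_model = converter.convert()

# The resulting .tflite file can then be loaded on the device in PRO Mode.
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)
```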

A detailed description of the tests can be found here: http://ai-benchmark.com/tests.html

Note: Hardware acceleration is supported on all mobile SoCs with dedicated NPUs and AI accelerators, including Qualcomm Snapdragon, MediaTek Dimensity / Helio, Google Tensor, HiSilicon Kirin, Samsung Exynos, and UNISOC Tiger chipsets. Starting with AI Benchmark v4, one can also enable GPU-based AI acceleration on older devices in the settings ("Accelerate" -> "Enable GPU Acceleration" / "Arm NN"; OpenGL ES 3.0+ is required).

Updated on Sep 25, 2024

Data safety

Safety starts with understanding how developers collect and share your data. Data privacy and security practices may vary based on your use, region, and age. The developer provided this information and may update it over time.
No data shared with third parties
No data collected

Ratings and reviews

4.4 ★ (1.48K reviews)

What's new

1. New tasks and models: Vision Transformer (ViT) architectures, Large Language Models (LLMs), Stable Diffusion network, etc.
2. Added tests checking the performance of quantized INT16 inference (see the sketch after this list).
3. LiteRT (TFLite) runtime updated to version 2.17.
4. Updated Qualcomm QNN, MediaTek Neuron, TFLite NNAPI, GPU and Hexagon NN delegates.
5. Added Arm NN delegate for AI inference acceleration on Mali GPUs.
6. The total number of tests increased to 83.
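
The INT16 tests mentioned in item 2 correspond to TensorFlow Lite's 16x8 quantization scheme (16-bit activations with 8-bit weights). Below is a minimal sketch of producing such a model with the standard converter; the Keras model and the random calibration data are placeholder assumptions:

```python
import numpy as np
import tensorflow as tf

# Assumed inputs: a trained Keras model and calibration samples shaped like real inputs.
model = tf.keras.applications.MobileNetV3Small(weights=None)

def representative_data():
    # Calibration samples used to estimate activation ranges (placeholder data).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# 16-bit activations with 8-bit weights ("INT16" quantization).
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
int16_model = converter.convert()

with open("model_int16.tflite", "wb") as f:
    f.write(int16_model)
```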