Taichi Blogs
Pythonic Supercomputing: Scaling Taichi Programs with MPI4Py
At GTC (GPU Technology Conference) 2017 in Beijing, Nvidia unveiled its Tesla V100 GPU accelerator, which has since become a must-have for deep learning. It was on the same occasion that Jensen Huang, Nvidia's CEO, offered a piece of advice that kept resonating in our heads for years to come.
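The post shows how to scale Taichi programs across processes with MPI4Py. As a minimal sketch of the general pattern (not the post's actual code; the kernel, problem size, and reduction are illustrative assumptions), each MPI rank can run a Taichi kernel on its own chunk of data and combine partial results with MPI:

```python
# Minimal sketch: one Taichi kernel per MPI rank, combined with an MPI reduce.
# Run with e.g.: mpirun -np 4 python demo.py
import taichi as ti
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

ti.init(arch=ti.cpu)  # or ti.cuda, with one GPU per rank

N = 1_000_000  # elements per rank (illustrative)
x = ti.field(dtype=ti.f32, shape=N)

@ti.kernel
def fill_and_sum(offset: ti.i32) -> ti.f32:
    # The outer loop is parallelized by Taichi; `s += ...` becomes
    # an automatic reduction.
    s = 0.0
    for i in range(N):
        x[i] = (offset + i) * 0.5
        s += x[i]
    return s

local = fill_and_sum(rank * N)          # each rank handles its own slice
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", total)
```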
Taichi's Quantized Data Types: Same Computational Code, Optimized GPU Memory Usage
Starting from v1.1.0, Taichi provides quantized data types. But why is quantization important, especially in scenarios where Taichi stands out, such as physical simulation? This blog demonstrates how this new feature can significantly reduce your GPU memory usage while requiring zero changes to your computational code.
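For a flavor of the API, here is a minimal sketch based on the documented quantized-type interface; the bit widths, value range, and field layout are assumptions for illustration, and exact parameter names may differ between Taichi versions. The key point is that the kernel body is identical to what you would write for an ordinary `ti.f32` field:

```python
import taichi as ti

ti.init(arch=ti.cuda)

# A 16-bit fixed-point type; bit count and range chosen for illustration.
fixed16 = ti.types.quant.fixed(bits=16, max_value=2.0)

x = ti.field(dtype=fixed16)

# Pack two 16-bit values into each 32-bit word, halving memory use.
bitpack = ti.BitpackedFields(max_num_bits=32)
bitpack.place(x)
ti.root.dense(ti.i, 1024).place(bitpack)

@ti.kernel
def scale():
    # Same computational code as with a regular ti.f32 field.
    for i in x:
        x[i] = x[i] * 0.5
```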
How Taichi Fuels GPU-accelerated Image Processing: A Beginner to Expert Guide
A beginner-to-expert tutorial on GPU-accelerated image processing with Taichi.
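As a taste of what the tutorial covers (a minimal sketch, not the tutorial's code; the image size and filter are illustrative assumptions), a per-pixel Taichi kernel like the one below runs one GPU thread per pixel:

```python
import taichi as ti

ti.init(arch=ti.gpu)

# Dimensions are illustrative; a real image could be loaded instead,
# e.g. with ti.tools.imread.
W, H = 512, 512
img = ti.Vector.field(3, dtype=ti.f32, shape=(W, H))
gray = ti.field(dtype=ti.f32, shape=(W, H))

@ti.kernel
def to_grayscale():
    # Each (i, j) iteration is mapped to its own GPU thread.
    for i, j in img:
        c = img[i, j]
        gray[i, j] = 0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2]
```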
Taichi AOT, the solution for deploying kernels on mobile devices
Physical simulation, which Taichi Lang is best at, has wide applications on mobile devices, such as real-time, physics-based interactions in mobile games or visual effects in short videos, thanks to Taichi's fast prototyping and cross-platform GPU acceleration.
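The workflow compiles kernels ahead of time in Python and ships the serialized module to a device runtime. Below is a hedged sketch of that export step using the `ti.aot.Module` interface; the kernel, field, and output directory are illustrative assumptions, and the exact `save` signature has varied across Taichi versions:

```python
import os
import taichi as ti

ti.init(arch=ti.vulkan)  # a backend the target mobile device supports

n = 8192
x = ti.field(dtype=ti.f32, shape=n)

@ti.kernel
def advance(dt: ti.f32):
    # Toy update standing in for a real simulation step.
    for i in x:
        x[i] += dt * ti.sin(i * 0.01)

# Compile ahead of time and serialize to disk; the saved module is then
# loaded on-device by the native Taichi runtime instead of the Python one.
mod = ti.aot.Module(ti.vulkan)
mod.add_field("x", x)
mod.add_kernel(advance)
os.makedirs("aot_module", exist_ok=True)
mod.save("aot_module")
```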
AST refactoring
In the previous blog post, we quoted a sentence from the Zen of Python. In this post, we will show you how we simplified Taichi's code accordingly.