Deadline: 15th of April, 2025
This iteration received 29 applications. The following projects were awarded GPU credit.
Convolutional neural networks will classify 14,000 dermoscopic images into the seven standard skin-cancer diagnoses. GPU credits let the team train, tune and deploy a high-accuracy model that could accelerate melanoma screening and aid clinicians worldwide.
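To give readers a flavour of the approach, here is a minimal transfer-learning sketch in PyTorch: an ImageNet-pretrained backbone with its head swapped for the seven diagnosis classes. The backbone choice, optimizer and learning rate are my own illustrative assumptions, not the team's actual pipeline.

```python
# Illustrative only: fine-tuning a pretrained CNN for 7 lesion classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # the seven standard dermoscopic diagnoses

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of dermoscopic images."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # (batch, 7) class scores
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```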
Builds a custom diffusion model to transform grayscale Synthetic Aperture Radar data into realistic optical‑style images, giving scientists an intuitive view where photography is impossible. High‑VRAM GPUs power diffusion training and possible custom CLIP work, culminating in an open‑source tool.
A student team is tackling Alzheimer’s MRI, ECG‑based heart‑disease detection and stroke prediction on million‑record datasets. Deep networks run 50–500 epochs; daily experiments outstrip local hardware. GPU credits will cut each training cycle from hours to minutes and speed three forthcoming papers.
GPUs sieve primes up to 10^12, spot gap “constellations” (twins, quadruplets, …) and map each pattern to notes or chords, generating a 30‑second audio piece plus statistics on constellation frequencies. The project turns number theory into music you can hear.
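The idea is easy to prototype at a much smaller scale. The toy sketch below sieves primes up to 10^6 instead of 10^12, counts gaps by size (gap 2 = twins) and maps each gap to a MIDI-style pitch; detecting longer constellations such as quadruplets, and rendering actual audio, would need pattern matching over runs of gaps plus a synthesis step, both omitted here.

```python
# Toy sketch of the sonification idea, not the project's code.
import numpy as np

def sieve(limit: int) -> np.ndarray:
    """Return all primes <= limit via the Sieve of Eratosthenes."""
    is_prime = np.ones(limit + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = False
    return np.flatnonzero(is_prime)

primes = sieve(10**6)                 # 10**12 would need a segmented GPU sieve
gaps = np.diff(primes)

# Gap-size statistics: gap 2 = twin primes, 4 = cousin primes, 6 = sexy primes.
gap_counts = {int(g): int(c) for g, c in zip(*np.unique(gaps, return_counts=True))}

# Map each gap to a pitch around middle C (MIDI note 60) for a short melody.
melody = [60 + int(g) for g in gaps[:64]]
print("twin-prime gaps:", gap_counts[2], "| first notes:", melody[:8])
```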
Aims to bring AlphaFold‑style advances to RNA by assembling augmented secondary‑structure data and testing new deep architectures that work despite sparse alignments. GPU power enables large‑scale experiments toward accurate 2‑D and 3‑D RNA folding, unlocking insights into gene regulation and therapeutics.
Optimizes a separable CNN to spot eccentric binary‑black‑hole merger signatures in LIGO images, reducing detection latency for future alerts. Training on 450,000 simulated‑waveform images with CUDA‑accelerated TensorFlow demands significant GPU capacity.
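Since the project mentions TensorFlow, here is a hedged Keras sketch of what a small separable-CNN classifier can look like; the input shape, layer widths and binary output are assumptions for illustration, not the team's published architecture.

```python
# Illustrative separable-CNN binary classifier (signal present vs. absent).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_separable_cnn(input_shape=(128, 128, 1)) -> tf.keras.Model:
    model = models.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(128, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # merger-signature probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_separable_cnn()
model.summary()  # depthwise-separable convolutions keep the parameter count low
```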
Creates an intelligent image-search engine for a large Google Drive design library. CLIP embeddings are stored in a vector database and served via web UI, enabling queries like “red abstract pattern” without manual tags. GPUs accelerate bulk embedding generation and future collection updates.
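A minimal sketch of that search flow, assuming the open clip-ViT-B-32 checkpoint from sentence-transformers and a plain in-memory index; the project's actual model, vector database and Google Drive integration are not shown.

```python
# Illustrative CLIP-based image search; paths and model choice are assumptions.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")

# Offline, GPU-accelerated step: embed the whole design library once.
image_paths = ["designs/001.png", "designs/002.png"]  # placeholder paths
image_embs = model.encode([Image.open(p) for p in image_paths],
                          convert_to_numpy=True, normalize_embeddings=True)

# Online step: embed the text query and rank images by cosine similarity.
query_emb = model.encode(["red abstract pattern"],
                         convert_to_numpy=True, normalize_embeddings=True)[0]
scores = image_embs @ query_emb   # cosine similarity (embeddings are unit-norm)
best = int(np.argmax(scores))
print("Best match:", image_paths[best], "score:", float(scores[best]))
```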
Builds a hybrid supervised‑plus‑unsupervised deep‑learning engine that learns normal network behaviour and flags both known and novel cyber‑attacks in real time. Designed for cloud, enterprise and IoT networks, it adapts continuously to new threat patterns, beating static rule‑based IDS by cutting response times and preventing breaches.
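One common way to combine the two signals is to run a supervised classifier for known attack classes alongside an unsupervised anomaly detector fitted on benign traffic, and alert when either fires. The scikit-learn sketch below illustrates that pattern with random placeholder features; the project's real feature extraction, models and streaming logic are not described here.

```python
# Illustrative hybrid detector: supervised + unsupervised, placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))      # flow features (placeholder)
y_train = rng.integers(0, 2, size=1000)    # 0 = benign, 1 = known attack

supervised = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# Fit the anomaly detector on benign traffic only, so it models "normal".
anomaly = IsolationForest(contamination=0.01,
                          random_state=0).fit(X_train[y_train == 0])

def flag(flows: np.ndarray) -> np.ndarray:
    """Alert if either the classifier or the anomaly detector fires."""
    known = supervised.predict(flows) == 1
    novel = anomaly.predict(flows) == -1   # IsolationForest marks outliers as -1
    return known | novel

print(flag(rng.normal(size=(5, 20))))
```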
Zoltan's FLOPs is a mini-grant program for GPU computing projects. The goal is to provide small-scale funding to help researchers and developers access GPU compute time on modern hardware.
Total grant budget this iteration: $5,000.
Applications for this iteration are closed; the awarded projects are listed above. Watch this site for future iterations.
I grew up in an era when computing was fairly democratic.
A kid in a poor, post-communist country with a used 486 could pick up coding and create programs just like anyone with better means. Today, computing is becoming less accessible to the masses. AI and machine learning require expensive hardware and significant energy resources.
Even top universities struggle to keep up, falling behind for-profit organizations. Students and researchers find it difficult to get GPU time for their projects. Talented kids interested in AI may never get the compute they need.
To offset the carbon impact of the GPU usage, 280 kg of CO2 will be permanently removed using Climeworks. My estimate of the computational carbon footprint is based on [1].
I am a programmer & entrepreneur based in Barcelona, Spain. LinkedIn