
A 9th-Grade Internship, a CT Scan, and an AI That Learned to Tell Sugar from Salt

February 12, 2026 | Dr. Daniel Stickler

During a recent three-week student internship at Comet Yxlon, I had the chance to mentor a 9th-grader who was curious, motivated—and (for good reasons) not allowed to operate X-ray equipment independently. So we designed a project that would still feel real: hands-on data, real analysis, and a clear outcome. The goal was to let her work independently in software while learning the basics of X-ray imaging and computed tomography.

From “What is X-ray?” to “Let’s train a neural network”

We started with fundamentals: X-ray physics at a high level, key components (source, detector, geometry), and what makes CT different from 2D imaging. For a 15-year-old, this is far beyond typical school content—so we kept it visual, practical, and anchored in examples.
Then came the experiment.

The “Sugar vs. Salt” CT challenge

We scanned a simple sample on an FF35 CT: a plastic drinking straw filled halfway with sugar and halfway with salt. In the raw projections, the difference was already visible: carbon-rich sugar attenuates less than the denser, higher-atomic-number NaCl. The interesting part happened after we shook the straw thoroughly to mix the grains. In 2D, the mixture became hard to separate; only small hints remained at the edges.

Raw 2D X-ray projection from the FF35 CT, with the salt and sugar clearly separated.

Raw 2D X-ray projection from the FF35 CT, now with the salt and sugar mixed. The two materials are much harder to separate visually.

In the reconstructed 3D volume, separation was conceptually easier… but still not trivial. 


Why thresholding wasn’t enough

A simple threshold could segment salt reasonably well, but sugar was much harder:

  • sugar's gray values were weaker and partially blended with air and mild beam-hardening artifacts
  • plastic particles with similar intensity further confused the segmentation

So we moved beyond manual segmentation.
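As an illustrative sketch (not the actual Dragonfly workflow), global thresholding of a reconstructed volume comes down to a single comparison per voxel. The volume and the threshold value here are hypothetical; in practice the threshold is read off the gray-value histogram:

```python
import numpy as np

# Hypothetical toy volume: gray values in arbitrary reconstruction units.
rng = np.random.default_rng(0)
volume = rng.normal(loc=0.2, scale=0.05, size=(64, 64, 64))  # air/sugar-like background
volume[20:40, 20:40, 20:40] += 0.5                           # a denser "salt" block

# Threshold chosen by eye from the histogram -- an assumption, not a fixed rule.
salt_mask = volume > 0.5

print(salt_mask.sum(), "voxels classified as salt")
```

This works when one material is clearly brighter than everything else, which is exactly why it handled salt but not sugar.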

Visualisation of the 3D reconstructed volume in Dragonfly 3D World, with thresholding applied to separate the salt particles.

Visualisation of the 3D reconstructed volume in Dragonfly 3D World, now with segmentation applied to separate the salt particles.

Teaching AI to do the job

Using the Segmentation Wizard in Dragonfly 3D World, we trained a neural network to classify the volume. We iterated the label strategy:

  1. salt vs. sugar, with “plastic/particles/air” as a third class
  2. then refined further by separating plastic and particles into additional classes
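The Segmentation Wizard's neural network is trained interactively inside Dragonfly, so there is no script to show. But the underlying idea of learning classes from a few sparsely labeled voxels can be sketched with a deliberately simple nearest-class-mean classifier; all names and values below are illustrative:

```python
import numpy as np

def train_class_means(volume, labels):
    """Learn one mean gray value per labeled class (0 = unlabeled)."""
    return {c: volume[labels == c].mean() for c in np.unique(labels) if c != 0}

def classify(volume, class_means):
    """Assign every voxel to the class whose mean gray value is nearest."""
    classes = np.array(sorted(class_means))
    means = np.array([class_means[c] for c in classes])
    nearest = np.abs(volume[..., None] - means).argmin(axis=-1)
    return classes[nearest]

# Toy 1x2x2 volume with two sparse labels: class 1 (dark) and class 2 (bright).
vol = np.array([[[0.10, 0.15], [0.90, 0.85]]])
labels = np.array([[[1, 0], [2, 0]]])  # 0 = unlabeled
seg = classify(vol, train_class_means(vol, labels))
print(seg)  # class 1 for the dark voxels, class 2 for the bright ones
```

Iterating the label strategy, as described in the two steps above, simply means retraining with a richer set of labels: first three classes, then splitting "plastic" and "particles" into classes of their own.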

The final result is genuinely impressive—especially considering the short time frame and the intern's age (the visuals speak for themselves).

Before segmentation (left) and after segmentation (right).

A practical note on speed vs. detail

To keep iteration fast and the workflow smooth, we scanned with 2×2 detector binning and then applied an additional 2×2×2 binning to the reconstructed volume. That's why some surfaces look a bit stepped, but it was the right trade-off to accelerate learning and experimentation.
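The 2×2×2 volume binning mentioned above is essentially average-pooling. A minimal numpy sketch (assuming the whole volume fits in memory):

```python
import numpy as np

def bin_volume(vol, factor=2):
    """Average-pool a 3D volume by `factor` along each axis (e.g. 2x2x2 binning)."""
    # Crop each axis to a multiple of `factor` so the reshape below is valid.
    z, y, x = (s - s % factor for s in vol.shape)
    vol = vol[:z, :y, :x]
    return vol.reshape(z // factor, factor,
                       y // factor, factor,
                       x // factor, factor).mean(axis=(1, 3, 5))

small = bin_volume(np.arange(8.0).reshape(2, 2, 2))
print(small)  # a single voxel holding the mean of all eight inputs: 3.5
```

Each 2× binning step halves the resolution per axis but shrinks the data by a factor of eight, which is what made rapid iteration possible.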

Main Takeaways: 

What I enjoyed most was seeing how quickly enthusiasm grows when students get meaningful tasks—real tools, real problems, real results. Pairing X-ray/CT with AI segmentation and machine learning turned the internship into something far more engaging than “watching over someone’s shoulder.”

Next up: we also worked on a second topic—3D printing plus CT for deviation analysis (as-built vs. CAD). I'll share that in a follow-up blog.

3D reconstruction of the volume, and several slices. Before segmentation and visualised in Dragonfly 3D World. 

3D reconstruction of the volume, and several slices. After segmentation and with the classes for air, plastic and particles removed. Visualised in Dragonfly 3D World. 
