You are viewing archived content (2011-2018). For current research, visit research.colfax-intl.com

MC² 004: Signal Processing in a Physics Experiment


Speaker

Prof. Jeffrey S. Dunham, Professor of Physics, Middlebury College

Prof. Jeffrey S. Dunham has taught physics for 34 years at Middlebury College in Middlebury, Vermont, where he is now William R. Kenan Jr. Professor of Natural Sciences. He currently conducts experimental research in nonlinear dynamics. He is using HPC techniques at the workstation level to analyze large data sets from experiments that can be performed in a small-college laboratory. He received his Bachelor of Science degree in physics from the University of Washington in 1975 and his Ph.D. in physics from Stanford University in 1981.

Presentation

Savitzky-Golay Filter Algorithm for Large One-Dimensional Data Sets

A chaotic pendulum experiment in our laboratory performs about 275 million digitized angle measurements in a 24-hour day. A Poincaré plot of the raw data shows significant and unacceptable discretization effects from the optical rotary encoder used to measure angle. The raw data is therefore passed through a Savitzky-Golay (SG) filter algorithm that smooths it. The filter produces about 275 million smoothed values of angular position, velocity, and acceleration that can be used to produce a Poincaré plot of exceptional quality. The two SG filter parameters, the filter radius and the polynomial order, are not known a priori, so it is necessary to sweep these parameters over large data sets. In addition, we determine fractal dimension and Lyapunov exponents for these data sets. The need to perform repetitive passes over many days of data is the primary reason for our interest in Knights Landing (KNL). The SG filter algorithm is “embarrassingly parallel,” so its performance should approach the maximum achievable by any algorithm on KNL. In anticipation of excellent performance for the SG filter algorithm, we made a preliminary series of measurements verifying that KNL can actually deliver the advertised 3 TFLOP/s double-precision and 6 TFLOP/s single-precision compute performance.
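To make the structure of the algorithm concrete, here is a minimal C++ sketch of the smoothing (0th-derivative) case. It is not Prof. Dunham's implementation (see the code link below): it assumes the well-known closed-form weights for a quadratic/cubic least-squares fit over a window of 2m+1 samples, and the names sg_weights and sg_smooth are ours. Smoothed velocity and acceleration come from analogous first- and second-derivative weight sets, omitted here.

  #include <cstddef>
  #include <vector>

  // Closed-form Savitzky-Golay smoothing weights for a quadratic/cubic
  // least-squares fit over a window of 2m+1 samples. (General polynomial
  // orders require solving a small linear system instead.)
  std::vector<double> sg_weights(int m) {
    std::vector<double> c(2 * m + 1);
    const double norm = (2.0 * m - 1.0) * (2.0 * m + 1.0) * (2.0 * m + 3.0);
    for (int i = -m; i <= m; ++i)
      c[i + m] = (3.0 * (3.0 * m * m + 3.0 * m - 1.0) - 15.0 * i * i) / norm;
    return c;
  }

  // Smooth the interior of a 1-D signal (y must be at least as long as x).
  void sg_smooth(const std::vector<double>& x, std::vector<double>& y, int m) {
    const std::vector<double> c = sg_weights(m);
    const std::ptrdiff_t n = (std::ptrdiff_t)x.size();
    #pragma omp parallel for
    for (std::ptrdiff_t t = m; t < n - m; ++t) {
      double s = 0.0;
      for (int i = -m; i <= m; ++i)   // fixed-weight dot product; a natural vectorization target
        s += c[i + m] * x[t + i];
      y[t] = s;
    }
  }

Every output sample depends only on a fixed window of inputs, so the outer loop has no cross-iteration dependencies; that independence is what makes the filter embarrassingly parallel.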

Recording

Slides:

 Colfax-MC2Series-004-Jeffrey-Dunham.pdf (5 MB)

Code: http://sites.middlebury.edu/dunham/code/

Video: this webinar aired July 11, 2017


Editorial: Big Datasets from Small Experiments

Modern experiments produce vast amounts of data. Abundant data is not exclusive to the “big gun” institutions, such as observatories and particle colliders; it is also the norm in modest-size labs working on anything from genomics to microscopy. Even outside of science, you don't need to look far to find large datasets. With the Internet of Things (IoT) at play, a modern smart home is a continuous source of big datasets! With data collection this easy, how does one analyze the data efficiently?

The work of Prof. Jeffrey Dunham connects real-world phenomena to data collection to computing in a very pure experiment. He has built a tabletop-scale chaotic pendulum equipped with a high-precision rotary encoder. The pendulum produces hundreds of gigabytes of data per day. This data reveals the strange attractor of the pendulum, which is a fractal. This manifestation of “order in chaos” is not only a thing of beauty. It has roots in chaos theory, which also applies to climate studies, biology, cryptography, and technology. However, the amazing fractal structure of the data emerges only with proper post-processing. “Proper” means that the experimenter must scan the parameter space of the Savitzky-Golay filter, and for each point in that space the computationally expensive filter must be applied to the entire dataset. For good science in this experiment, computational performance is paramount.
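As a rough illustration of what such a scan costs computationally, here is a hedged sketch of a sweep driver. The parameter range, the synthetic input, and the “roughness” figure of merit are all our stand-ins, not the criteria used in the experiment, and the sketch assumes the sg_smooth routine from the code above is compiled alongside it. (The experiment also sweeps the polynomial order; the closed-form weights above fix it, so only the radius is swept here.)

  #include <cmath>
  #include <cstddef>
  #include <cstdio>
  #include <vector>

  void sg_smooth(const std::vector<double>&, std::vector<double>&, int);  // from the sketch above

  int main() {
    // Synthetic stand-in for encoder data (the real runs use ~275 million samples).
    const std::size_t n = 1 << 20;
    std::vector<double> raw(n), smooth(n, 0.0);
    for (std::size_t t = 0; t < n; ++t)
      raw[t] = std::sin(2e-4 * t)
             + 1e-3 * ((((t * 2654435761u) >> 16) & 0xffu) / 255.0 - 0.5);  // crude pseudo-noise

    for (int m = 4; m <= 1024; m *= 2) {  // sweep of the filter radius
      sg_smooth(raw, smooth, m);          // each sweep point is a full pass over the dataset
      double s = 0.0;                     // mean squared second difference as a crude smoothness proxy
      for (std::size_t t = 1025; t + 1025 < n; ++t) {  // skip the unfiltered margins
        const double d = smooth[t + 1] - 2.0 * smooth[t] + smooth[t - 1];
        s += d * d;
      }
      std::printf("radius %5d  roughness %.3e\n", m, s / (double)(n - 2050));
    }
    return 0;
  }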

In his presentation in the Modern Code Contributed Talks (“MC² Series”), Prof. Dunham shares his experience with this computational challenge. He talks about the modern code practices that allowed him to shrink the data processing time from hours to fractions of a second. That was made possible by two factors: the use of an Intel® Xeon Phi™ processor (code-named Knights Landing) and a thoughtful approach to parallel programming. Prof. Dunham also talks about probing the peak performance of these processors, the roofline model, and the importance of vector arithmetic.
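For reference, the 3 TFLOP/s and 6 TFLOP/s figures quoted in the abstract are consistent with the usual peak-rate arithmetic. Assuming the 68-core Xeon Phi 7250 at its 1.4 GHz base clock (the exact SKU is not stated here), with two AVX-512 vector units per core and a fused multiply-add counting as two floating-point operations:

  peak = cores × clock × vector lanes × VPUs per core × FLOPs per FMA
  DP:    68 × 1.4 GHz × 8 lanes  × 2 × 2 ≈ 3.05 TFLOP/s
  SP:    68 × 1.4 GHz × 16 lanes × 2 × 2 ≈ 6.09 TFLOP/s

Reaching these rates in practice requires back-to-back FMAs on full 512-bit vectors, which is why probing peak performance and the roofline model go hand in hand with vector arithmetic.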

Tune into the webinar on July 11, 2017, or watch a recording after this date on this page.
