Many combinations of movements occur on the field, court and rink. Analyzing these movements allows us to quantify parts of the game, which can generate performance indicators, assess drivers of success, help make more accurate calls, or simply allow viewers to enjoy the sport in greater detail. For these reasons, the sports entertainment industry has seen a rise in the use of sensors and multi-angle, video-capture analysis. Using several cameras to record the action has been a staple for some time, but widespread use of advanced sensors like radar and RFID has become more common. On the optical side, new abilities to analyze video offer opportunities to tag and quantify the action. In both cases, the goal is the same: intelligently tracking the game.

Current quantification of aspects of the game

With the use of real-time sensors or post-event software analysis, we can render aspects of the action in fine detail. Sensors may track the location and movement of the ball over key moments. For example, in baseball, TrackMan measures the velocity and spin rate of incoming pitches and the exit velocity and trajectory of hits, and extrapolates the end distance and pathway of the ball through the air. We find local positioning and optical tracking in soccer with StatSports and Catapult, respectively. The NBA generates ball and player movement data using Second Spectrum. The NFL incorporates Zebra RFID technology with tags on shoulder pads, as well as in footballs, to communicate their position and movement on the field.

The ability to generate physics-based statistics provides value to teams in their competitive analysis and training, but it also offers value to viewers interested in the details. Because sports teams and leagues exist in a competitive space, constantly vying for victory and viewers’ attention, such statistical capabilities play a large role in their success.

Regarding the use of data in such competitive environments, IT leader Mark Hurd tells us, “The way to overachieve in those situations is to have better information, to know who you’re talking to, and to know how to motivate the customer. To seize the very unique opportunity, you have to differentiate that relationship and be nimble enough with your technology to capitalize on that moment.” In order to do this, he adds, “Data becomes very important, and the information, the ability to mine that information, [and] take advantage of that information at those unique opportunities.”

Tracking with sensors

To create valuable information, sensor technology may incorporate dual-polarized radar to capture data points that describe aspects of the action. This technology, first used in missile-detection systems and later in weather analysis, is now applied to sports, where it captures dozens of points from the players, clubs, bats, sticks, pucks and balls.

Using the locational data of these points over time, the system directly measures the position, speed and movement of the tracked objects. From this information, it can infer many other related components of the action. The tracking of these points of interest, and the resulting interpolations and extrapolations, creates a host of metrics that can be used for biomechanical analysis and the understanding of outcomes. Athletes gain quantitative visibility into aspects like step locations, ball speed, spin rate, launch angle, direction and other variables connected to their gameplay. This information helps coaches identify mechanical inefficiencies, assess game plans and improve on both in a data-driven fashion. It also offers some fun stats to look at in the process.
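The core calculation here, deriving speed from timestamped position samples, can be sketched in a few lines. This is an illustrative toy, not any vendor's actual pipeline; the sample rate and coordinates are invented for the example.

```python
# A minimal sketch of turning timestamped (t, x, y) position samples,
# as a radar or optical system might record them, into interval speeds.
# Units here are assumed to be seconds and meters.
import math

def speeds_from_positions(samples):
    """Return the average speed (m/s) over each interval
    between consecutive (t, x, y) samples."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)  # straight-line displacement
        speeds.append(dist / (t1 - t0))
    return speeds

# A ball tracked at 10 Hz, moving 3 m along x every 0.1 s,
# yields interval speeds of roughly 30 m/s.
track = [(0.0, 0.0, 0.0), (0.1, 3.0, 0.0), (0.2, 6.0, 0.0)]
interval_speeds = speeds_from_positions(track)
```

Interpolating between samples, or fitting a smooth trajectory through them, is how systems then extrapolate values the sensor never directly observed.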

In addition to radar, GPS and RFID technologies positionally track players and game objects via physical tags or hardware placed on them. They create data describing player and ball location and movement over the course of the game, delivering metrics like total distance traveled, route shape and efficiency, and movement velocity. Wearable sensors, such as gyroscopes and accelerometers, provide further information on locomotion. All this data contributes to the system’s ability to track the action and tag conditions (like dribbling or passing) based on their underlying movement characteristics. Combining this information with optical data lends itself to situational analysis that further helps gauge and quantify occurrences on the field.
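The tagging step can be as simple as mapping a measured value to a labeled band. The thresholds and labels below are invented for illustration; real systems combine many signals and tune their cutoffs per sport and per athlete.

```python
# A hedged sketch of condition tagging from wearable-sensor data:
# label each movement sample by its speed band. The bands are
# illustrative assumptions, not values from any real tracking system.
def tag_locomotion(speed_mps):
    """Label a movement sample by speed (m/s)."""
    if speed_mps < 0.5:
        return "stationary"
    if speed_mps < 2.0:
        return "walking"
    if speed_mps < 5.5:
        return "running"
    return "sprinting"

# Four samples from a hypothetical session, slow to fast.
samples = [0.2, 1.4, 4.8, 7.1]
labels = [tag_locomotion(s) for s in samples]
# labels: ['stationary', 'walking', 'running', 'sprinting']
```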

The potential for optical tracking

Using cameras, we can quantify the specifics of events occurring in the game. They capture the action and record it into video, from which machine-learning analysis software finds reference points to trace over time. These points are derived from static, clearly defined features that create anchors in the background, as well as from tracked regions on moving objects. With points and regions identified, the system aims to follow them frame by frame over the course of the video, logging their positions and displacement rates within the frame over time. Combining these numbers with the video’s frame rate, as well as data on the camera’s location relative to anchors in the shot, allows the software to extrapolate the real-world location and movement of the points. The quantitative tracking data generated from this process offers direct statistical and entertainment value if the points accurately represent the behavior of the object in question, or if the points themselves serve as the aspect of interest.
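The conversion from pixel displacement to real-world speed can be sketched as follows, assuming the camera's meters-per-pixel scale has already been calibrated from known anchors in the shot. The function name and numbers are illustrative.

```python
# A minimal sketch: frame-to-frame pixel displacement, combined with
# the video's frame rate and a calibrated meters-per-pixel scale,
# yields real-world speed. Assumes motion roughly parallel to the
# image plane; real systems account for perspective and depth.
import math

def real_world_speed(p0, p1, fps, meters_per_pixel):
    """Speed (m/s) of a tracked point between two consecutive frames.
    p0, p1: (x, y) pixel coordinates in each frame."""
    dx = (p1[0] - p0[0]) * meters_per_pixel
    dy = (p1[1] - p0[1]) * meters_per_pixel
    # meters moved per frame, times frames per second, gives m/s
    return math.hypot(dx, dy) * fps

# A point moving 40 px between frames at 60 fps, with a 1 cm/px
# scale, covers 0.4 m per frame -- roughly 24 m/s.
speed = real_world_speed((100, 200), (140, 200), fps=60, meters_per_pixel=0.01)
```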

A single 2D shot may not provide enough clearly defined visual data to precisely follow the location and behavior of points on the 3D object of interest. To help, analysts may feed the system additional video captured from different angles. The system can then cross-reference the locations and movements of points across the 2D planes that describe the same event from multiple angles. By turning the analysis from a 2D test into a multi-planar assessment of the action, systems can better map the 3D position and movement of points on the objects in the video. With points precisely mapped to their parent objects, the objects themselves can be better inferred, recognized and tracked over time.
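One standard way to recover a 3D position from two 2D views is triangulation via the direct linear transform. The sketch below uses idealized, hand-built camera matrices; in practice each camera must first be calibrated against anchors in the scene.

```python
# A hedged sketch of two-view triangulation (direct linear transform):
# a point seen in two calibrated 2D views is mapped to a single 3D
# position. The toy camera matrices are illustrative assumptions.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its 2D projections in two views.
    P1, P2: 3x4 camera projection matrices; pt1, pt2: (x, y) points."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A, found via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x,
# both looking down the z-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
# Projections of the 3D point (0.5, 0.2, 2.0) in each view.
point = triangulate(P1, P2, (0.25, 0.1), (-0.25, 0.1))
```

With more than two views, the same least-squares formulation simply gains extra rows, which is why adding camera angles tightens the 3D estimate.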

These technologies might also eventually provide the ability to extrapolate player intent and micro-decision-making. Using the tracking points, 3D models and their simulated physics, in combination with foot-step data and expected (and potentially individualized) reaction timings, we can work to determine, for instance, whether a player made an honest attempt to divert movement or force before a collision. This in turn can help determine whether such collisions could reasonably have been avoided.
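A speculative sketch of such an intent check might compare a player's heading before and after an expected reaction window. Everything here is hypothetical: the reaction time, the turn threshold and the input format are invented illustrations, not an officiating standard.

```python
# A speculative sketch: did a player change course within the expected
# reaction window before a collision? Inputs are (t, heading_degrees)
# samples of movement direction; thresholds are invented assumptions.
def attempted_to_divert(headings, t_seen, t_impact,
                        reaction_s=0.25, min_turn_deg=15.0):
    """Return True if heading changed by at least min_turn_deg between
    the end of the reaction window (t_seen + reaction_s) and impact."""
    window = [a for t, a in headings if t_seen + reaction_s <= t <= t_impact]
    if len(window) < 2:
        return False  # not enough usable samples after the reaction window
    return abs(window[-1] - window[0]) >= min_turn_deg

# A player who saw the hazard at t=0.0 and turned 16 degrees between
# t=0.3 and impact at t=0.5 registers as having attempted to divert.
headings = [(0.0, 0.0), (0.1, 0.0), (0.3, 8.0), (0.5, 24.0)]
diverted = attempted_to_divert(headings, t_seen=0.0, t_impact=0.5)
```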

This statistical end game may consist of optical and sensor-based 3D replications that hyper-accurately mimic events that occurred in the sport. From there, we would theoretically possess the ability to generate any spatio-temporal metric, infer decision-making processes and analyze the combinations that produce event outcomes in the game. The full extent of these abilities may lie down the road, but we are approaching it.