This week, players and coaches from the Philadelphia Eagles and the Kansas City Chiefs will spend countless hours in film rooms in preparation for the Super Bowl. They’ll research positions, plays, and formations, attempting to identify opponent trends that they may exploit while also reviewing their own footage to shore up flaws.
Engineers at Brigham Young University are developing new artificial intelligence technologies that might drastically reduce the time and expense of film study for Super Bowl-bound teams (and all NFL and college football teams), while also improving game strategy by leveraging the power of big data.
D.J. Lee, a BYU professor, master’s student Jacob Newman, and Ph.D. students Andrew Sumsion and Shad Torrie are using artificial intelligence to automate the time-consuming process of manually evaluating and annotating game video. The researchers used deep learning and computer vision to build an algorithm that can reliably detect and label players from game footage and determine the formation of the offensive team, a task that would otherwise take a slew of video assistants significant time.
“We were talking about it and thought, whoa, we could definitely train an algorithm to do this,” said Lee, an electrical and computer engineering professor. “So we scheduled a meeting with BYU Football to understand their methodology and realized right away, yep, we can do this a lot quicker.”
While the study is still in its early stages, the team’s system has already achieved more than 90% accuracy in detecting and labeling players, along with 85% accuracy in identifying formations. They believe the technology could ultimately replace the inefficient and time-consuming practice of manually annotating and analyzing recorded footage used by NFL and college teams.
Lee and Newman began by reviewing actual game footage provided by the BYU football team. As they examined it, they realized they needed additional camera perspectives to fully train their algorithm. So they bought a copy of Madden 2020, which shows the field from above and behind the offense, and manually labeled 1,000 images and videos from the game.
They used those images to train a deep-learning model to locate the players, then fed the detected players into a residual neural network (ResNet) to determine what position each was playing. Finally, their neural network uses the location and position data to identify which formation (from a list of more than 25) the offense is using, anything from the Pistol Bunch TE to the I Form H Slot Open.
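To give a feel for that final step, here is a deliberately simplified sketch in plain Python of matching detected player coordinates to a formation. It stands in for the neural network described above with a nearest-template matcher; the two templates, the coordinates, and the thresholds are invented for illustration and are not the BYU team’s data.

```python
import math

# Hypothetical stand-in for the formation classifier described above.
# Each detected player is an (x, y) pair: x = yards behind the line of
# scrimmage, y = yards left/right of center. A formation is identified
# by finding the template with the lowest average matching cost.
TEMPLATES = {
    # center, quarterback, fullback, running back stacked in one column
    "I Formation": [(0.0, 0.0), (1.5, 0.0), (5.0, 0.0), (7.0, 0.0)],
    # center, quarterback in shotgun, running back offset to one side
    "Shotgun": [(0.0, 0.0), (5.0, 0.0), (5.0, 2.5)],
}

def _match_cost(players, template):
    """Average, over template points, of the distance to the nearest detected player."""
    return sum(min(math.dist(p, t) for p in players) for t in template) / len(template)

def classify_formation(players):
    """Return the name of the template that best matches the detected players."""
    return min(TEMPLATES, key=lambda name: _match_cost(players, TEMPLATES[name]))
```

With noisy detections near the I Formation template, `classify_formation([(0.1, 0.0), (1.4, 0.1), (5.2, -0.1), (6.8, 0.0)])` returns `"I Formation"`. A real system must of course handle many more formations, missing detections, and mirrored alignments, which is part of why the researchers turned to deep learning.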
Lee says that when the player location and labeling information is correct, the system can identify formations 99.5% of the time. The I Formation, in which four players line up one directly behind another (center, quarterback, fullback, and running back), proved one of the most difficult to detect.
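To see why a stacked set like the I Formation is geometrically distinctive, and why overlapping players make it hard to recognize from imperfect detections, here is a hypothetical geometric check for an I-Formation-style backfield column. The function, tolerances, and coordinates are invented for illustration.

```python
# Hypothetical test for a stacked backfield column. Players are (depth, width)
# pairs in yards. The set counts as a column when every player sits within
# width_tol yards of the column's mean lateral position and consecutive
# depths are at least min_gap yards apart. Thresholds are illustrative.
def is_stacked_column(players, width_tol=0.75, min_gap=1.0):
    widths = [w for _, w in players]
    mean_w = sum(widths) / len(widths)
    # All players must share roughly one lateral position.
    if any(abs(w - mean_w) > width_tol for w in widths):
        return False
    # Depths must be distinct, so players are behind one another, not beside.
    depths = sorted(d for d, _ in players)
    return all(b - a >= min_gap for a, b in zip(depths, depths[1:]))
```

The catch is that a detector sees the image, not these clean coordinates: from a broadcast angle, players stacked front-to-back occlude one another, so one or more of the four may never be detected in the first place.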
According to Lee and Newman, the AI system could also be applied to other sports. In baseball, for example, it could pinpoint player positions on the field and identify common patterns to help teams improve how they defend against particular hitters. It could likewise locate soccer players to help design more efficient and effective formations.
The BYU technique is described in full in a journal paper titled “Automated Pre-Play Analysis of American Football Formations Using Deep Learning,” recently published in the journal Electronics as part of a special issue on Advances in Artificial Intelligence and Vision Applications.
“Once you have this data, you can do a lot more with it; you can take it to the next level,” Lee said. “Big data can help us understand this team’s plans or that coach’s patterns. It could help you determine whether they will go for it on 4th and 2 or punt. The idea of applying AI in sports is fascinating, and if we can give them even a 1% edge, it will be worthwhile.”