Humans enjoy providing a voiceover to the antics of their cats, dogs, lizards or other pets. Social media proves that. In our house, every pet has a specific voice, from the droll regal accent of Reggie the Ball Python (also called a Royal Python) to the sardonic, streetwise commentary of Tigra the Cat, who started life under our shed. As hilarious and appropriate as a British accent is for a Python, Reggie doesn't really paraphrase the Dead Parrot Sketch while eating a not-just-resting rat. Nor does Tigra wantz a cheezburger. We may project our own perceptions of our pets' minds onto them, but we're doing nothing beyond anthropomorphism.
In the pet food industry, though, the real mental states of dogs and cats matter. For example, new product development depends on decoding animals' mental motivations. During feeding trials, what's going on in a dog's head can only be interpreted by observing behaviors or analyzing bodily fluids and feces. Although similar techniques are used in human and pet preference taste tests, a dog can't answer a questionnaire like its primate counterparts can. Much of what makes a dog or cat choose one food over another can only be inferred by researchers.
While Dr. Dolittle's dream remains elusive, advances in brain scanning and analysis have opened a window into how dogs' brains reconstruct what they see. Researchers at Emory University found evidence that we should probably be using more verbs when overdubbing our dogs' antics. For pet food professionals, getting inside the head of a hound could provide insight into how vision and other perceptions influence dog food preference.
Adapted from a press release:
Dogs may be more attuned to actions than to who or what is doing that action.
The researchers recorded the fMRI neural data for two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine-learning algorithm to analyze the patterns in the neural data.
"We showed that we can monitor the activity in a dog's brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at," Gregory Berns, Emory professor of psychology, said.
The project was inspired by recent advancements in machine learning and fMRI to decode visual stimuli from the human brain, providing new insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.
"While our work is based on just two dogs, it offers proof of concept that these methods work on canines," first author of the study Erin Phillips said. Phillips conducted the research while a researcher in Berns' Canine Cognitive Neuroscience Lab. "I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work."
The Journal of Visualized Experiments published the results of the research.
Berns and colleagues pioneered training techniques for getting dogs to walk into an fMRI scanner and hold completely still and unrestrained while their neural activity is measured. A decade ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That opened the door to what Berns calls The Dog Project, a series of experiments exploring the mind of the oldest domesticated species.
Over the years, his lab has published research into how the canine brain processes vision, words, smells and rewards such as receiving praise or food.
Meanwhile, machine-learning computer algorithms kept improving, allowing scientists to decode some human brain-activity patterns. These algorithms "read minds" by detecting, within patterns of brain data, the different objects or actions an individual is seeing while watching a video.
"I began to wonder, 'Can we apply similar techniques to dogs?'" Berns recalls.
The first challenge was to come up with video content that a dog might find interesting enough to watch for an extended period. The Emory research team affixed a video recorder to a gimbal and selfie stick that allowed them to shoot steady footage from a dogâs perspective, at about waist high to a human or a little bit lower.
They used the device to create a half-hour video of scenes relating to the lives of most dogs. Activities included dogs being petted by people and receiving treats from people. Scenes with dogs also showed them sniffing, playing, eating or walking on a leash. Activity scenes showed cars, bikes or a scooter going by on a road; a cat walking in a house; a deer crossing a path; people sitting; people hugging or kissing; people offering a rubber bone or a ball to the camera; and people eating.
The video data was segmented by time stamps into various classifiers, including object-based classifiers (such as dog, car, human, cat) and action-based classifiers (such as sniffing, playing or eating).
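The time-stamped labeling described above can be pictured with a short sketch. This is purely illustrative: the label names and time ranges below are hypothetical, not taken from the study.

```python
# Hypothetical sketch of time-stamped video annotation: each segment of
# the half-hour video carries an object-based label (dog, car, human, cat)
# and an action-based label (sniffing, playing, eating). Values are
# illustrative, not from the Emory study.

def labels_at(annotations, t):
    """Return the (object, action) labels covering time t (in seconds)."""
    return [(obj, act) for (start, end, obj, act) in annotations
            if start <= t < end]

# Each entry: (start_sec, end_sec, object_label, action_label)
annotations = [
    (0.0, 12.5, "dog", "sniffing"),
    (12.5, 30.0, "human", "eating"),
    (30.0, 45.0, "car", "moving"),
]

print(labels_at(annotations, 20.0))  # [('human', 'eating')]
```

Keyed this way, any moment in the fMRI recording can be matched to what was on screen at that instant.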
Only two of the dogs that had been trained for fMRI experiments had the focus and temperament to lie perfectly still and watch the 30-minute video without a break, across three sessions for a total of 90 minutes. These two "superstar" canines were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.
"They didn't even need treats," says Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking on the video. "It was amusing because it's serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly."
Two humans also underwent the same experiment, watching the same 30-minute video in three separate sessions, while lying in an fMRI.
The brain data could be mapped onto the video classifiers using time stamps.
A machine-learning algorithm, a neural net known as Ivis, was applied to the data. A neural net is a method of doing machine learning by having a computer analyze training examples. In this case, the neural net was trained to classify the brain-data content.
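The general idea of training a classifier to map brain-data patterns onto the video labels can be sketched in a few lines. The study itself used the Ivis neural net; as a stand-in, the toy nearest-centroid classifier below (with fabricated feature vectors) just shows the shape of the task: learn from labeled examples, then assign a new brain-activity pattern to the closest learned label.

```python
# Illustrative stand-in for the study's pipeline. The actual work used
# the Ivis neural net; this toy nearest-centroid classifier only shows
# the general idea of mapping brain-activity feature vectors to action
# labels. All data values here are fabricated for demonstration.

def fit_centroids(X, y):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the label of the nearest centroid (squared Euclidean)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: sqdist(centroids[lab], x))

# Toy "brain data": one feature vector per time window, with action labels.
X = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1],   # sniffing-like patterns
     [0.1, 0.9, 0.2], [0.0, 0.8, 0.3]]   # eating-like patterns
y = ["sniffing", "sniffing", "eating", "eating"]

model = fit_centroids(X, y)
print(predict(model, [0.85, 0.15, 0.05]))  # sniffing
```

A real pipeline would of course work on thousands of voxels per time window rather than three numbers, but the train-then-classify structure is the same.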
For the two human subjects, the model developed using the neural net mapped the brain data onto both the object- and action-based classifiers with 99% accuracy. For the dogs, the model did not work for the object classifiers. It was 75% to 88% accurate, however, at decoding the action classifiers.
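Accuracy figures like these boil down to a simple ratio, sketched below with fabricated labels: the fraction of time windows where the decoded label matches the label of what was actually on screen.

```python
# Minimal sketch of how a decoding-accuracy figure is computed: the
# fraction of time windows where the predicted label matches the true
# one. The labels below are fabricated for demonstration.

def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Eight hypothetical time windows; the model misses one (window 3).
true_labels = ["sniffing", "eating", "eating", "playing",
               "sniffing", "playing", "eating", "sniffing"]
pred_labels = ["sniffing", "eating", "playing", "playing",
               "sniffing", "playing", "eating", "sniffing"]

print(accuracy(pred_labels, true_labels))  # 0.875
```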
The results suggest major differences in how the brains of humans and dogs work.
"We humans are very object-oriented," Berns says. "There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself."
Dogs and humans also have major differences in their visual systems, Berns notes. Dogs see only in shades of blue and yellow but have a slightly higher density of vision receptors designed to detect motion.
"It makes perfect sense that dogs' brains are going to be highly attuned to actions first and foremost," he says. "Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount."
For Phillips, understanding how different animals perceive the world is important to her current field research into how predator reintroduction in Mozambique may impact ecosystems. "Historically, there hasn't been much overlap in computer science and ecology," she says. "But machine learning is a growing field that is starting to find broader applications, including in ecology."
Additional authors of the paper include Daniel Dilks, Emory associate professor of psychology, and Kirsten Gillette, who worked on the project as an Emory undergraduate neuroscience and behavioral biology major. Gillette has since graduated and is now in a postbaccalaureate program at the University of North Carolina.
Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The human experiments in the study were supported by a grant from the National Eye Institute.