Ethical and Sustainable Research Practices Within Esports

On April 5th, I was invited to speak at the College Esports Expo at Emerson College in Boston on a panel with research experts in the esports field. The panel covered subjects related to ethical and sustainable research practice within the esports space. I sat as the industry voice on a panel featuring Spencer Kimball, head of Emerson College Polling, and TL Taylor, a sociologist with over 20 years of experience studying online communities and games. We tackled subjects ranging from the recent Cambridge Analytica scandal to the ethical use of data in an industry setting and cultural differences in global audiences.
In this blog post I’ll take the opportunity to revisit some of the questions presented to the panel and dive into the answers in a little more depth. The format is question and answer, derived from the question list I was given prior to the panel. Though we may not have made it to every subject on the panel itself, I figured I would use this post to flesh out the answers a bit more.
In retrospect, this panel, and the expo it was a part of, provided a powerful platform for networking and collaboration. An open dialogue like the College Esports Expo gives professionals in the esports space an opportunity to have discussions removed from pure business and creates room for conversations about industry best practice. While this post only covers my own responses in more depth, I hope it opens the floor for further dialogue on research outputs and the best ways to study the esports space, both from an academic and an applied perspective.

Q: Where is the most common source of research in the esports space?

A: Common sources of research tend to be one-off white papers, small organizations, or analytics companies, but on the industry side most of that research relies on outdated paradigms and methodologies.
From an industry standpoint, rigorous research is more or less nonexistent. Due to the proprietary nature of many analytics companies, we end up with a lot of “findings” that come from studies with no clear-cut methodology, zero information on sampling, and a general ethos of “trust us, we’re the experts, and we always have been.” On the other hand, we see a good bit of marketing and segmentation research, which is often reductive and informed by a post-WWII consumerist model.
The issue that kind of model raises is one based on the Field of Dreams idea: “If you build it, they will [buy].” This paradigm reduces audiences to predefined segments that fit a market ideal and may not reflect the cultural contexts of people in the esports space. Needless to say, FanAI is doing its best to bring ethical, more academically informed research to the table on the industry side of things.

Q: What are some of the advantages and disadvantages of different methodologies?

A: No method is inherently perfect. From a business standpoint, we consider the needs of clients before we choose methods for a project, and we use mixed-methods approaches for the best possible outcomes.
This is actually a question we think about a lot at FanAI, since we deal not only with Big Data at scale but also with Thick Data that we can collect ethnographically. When we approach projects we have to understand the needs of our clients and the kinds of data and outputs they value; thus we need to be intimately aware of the advantages and disadvantages of the particular methodologies we may want to deploy.
I can only really speak in depth about ethnographic methods, but let’s say that a client is interested in understanding what their audience thinks about the way they run events. In that scenario we can not only go to the events and do intercept interviews with attendees, but also conduct follow-up in-depth interviews (IDIs) to reflect with attendees on certain aspects of the event. One of the main advantages of participant observation at an event is that we get to see and experience the same sorts of things our participants do, so we are able to point to specifics about an event in an interview.
A disadvantage of this method is recruiting participants at an event. In the case of intercept interviews we often end up using a convenience sample, which means we talk to whoever will talk to us, without screening or filtering for demographics. In some cases this can lead to a sample that is demographically skewed. In other cases we may employ a screener to get a more representative sample for remote interviews, but in that scenario we lose out on the shared-experience part of participant observation. I’m lucky in the sense that at FanAI we also have access to Big Data analytics as part of the platform. With that I have the additional advantage of seeing things like demographic and spend data from specific audiences, which helps me build representative samples and further investigate the contexts behind the spend data.
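To make the contrast with convenience sampling concrete, here is a minimal sketch of stratified recruiting: drawing a fixed number of participants from each demographic stratum rather than interviewing whoever happens to be available. The records and field names are hypothetical, purely for illustration, and not FanAI’s actual schema or process.

```python
import random

rng = random.Random(0)
# Hypothetical audience records; field names are illustrative only.
audience = [
    {"id": i, "age_band": rng.choice(["18-24", "25-34", "35-44"])}
    for i in range(1000)
]

def stratified_sample(records, key, per_stratum, seed=42):
    """Draw an equal number of participants from each stratum,
    rather than whoever happens to be easiest to reach."""
    sampler = random.Random(seed)
    strata = {}
    for rec in records:
        strata.setdefault(rec[key], []).append(rec)
    sample = []
    for group in strata.values():
        sample.extend(sampler.sample(group, min(per_stratum, len(group))))
    return sample

recruits = stratified_sample(audience, key="age_band", per_stratum=10)
```

A screener serves the same purpose in the field: it enforces the per-stratum quotas that a convenience sample leaves to chance.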
Any group of methods is going to have its advantages and disadvantages. No methodology is perfect, which is why we often use multiple methodologies to build the strongest research project possible given a set of questions.

Q: What is the responsibility of researchers to present accurate data and not just what a client wants?

A: It is the ethical and professional responsibility of researchers to present accurate data, regardless of what a client wants.
This is one area where, ethically speaking, you cannot compromise. If you sacrifice the integrity of your data once to tell a client the story they want to hear, you undermine any other data collection you do. As researchers we present the facts of the data, and sometimes that means telling the client that the thing they want to do, the sponsor they want to pursue, or the activation they want to run is not going to go as well as they think. We have an ethical responsibility to represent our data accurately; that is what separates research and data science from hype statistics and assumption-based market research.
If you establish yourself as an honest, fact-based source of information, even when that information is hard to swallow, you build a reputation for good research and methods. That is the only way to feasibly build trust in your research. If you compromise on that and bend the truth or manipulate your data to willingly get certain answers, then you are no longer doing research; you are fabricating justification.

Q: How do the research questions that a university researcher may have differ from those that a sponsor may have?

A: The questions someone in the industry asks tend to be more concrete and applied than the core questions asked by an academic. Using these applied questions to guide academic inquiry can create powerful applied insights.
Coming from an academic background as an anthropologist, I often end up with questions that have to do with deeper cultural contexts, meaning making, and concepts related to social capital and power within a community. The questions I want to ask, or the questions I am wired to ask from this background, are often less concrete than those of clients. This can sometimes be a point of frustration, as the ways I think about questions are often different. For instance, a client may want to know specific, concrete things: “Will my audience like X sponsor?” or “What kind of events is my audience interested in?” While these questions may be direct, they often deal with single cases rather than getting at why an audience may enjoy events or what makes a sponsor trustworthy.
Most of the questions sponsors and clients have are very applied and can serve as a great baseline for getting at some of the deeper questions that inform the application. I find that in a lot of cases this helps ground me in concrete, applicable questions. I still get to investigate the more esoteric aspects along the way, but in the applied world research questions need to be guided by real solutions and outputs.

Q: How does your research end up getting communicated back to clients in a professional or deliverables context?

A: We cater deliverables to client needs; sometimes that means a slide deck, sometimes a one- or two-page mini report on a specific subject. In an industry context we steer away from long-winded reports, since no one at the top level reads them.
As with the questions and methods that guide research, the outputs tend to be catered to a client’s needs. In the case of FanAI, that also goes for the type of data a client values. Some clients may just want quantitative insight, or may not want to invest in a research project geared toward their audience. Some clients want to know what we already know and are satisfied with that, while others want us to devote 100% of our research efforts to projects related to their audience and specific questions.
I’ll be straight with you: no one in a professional deliverables context is going to read a huge research report, no matter how compelling the findings. In most cases we communicate results through short meetings or slide decks. Sometimes that also includes producing one- or two-page micro-reports on very specific subjects. Due to the nature of some of the ethnographic work we do, I end up producing standard insight reports on certain subjects that may not be client-specific. In doing so, we end up with a good spread of boilerplate reports that we can update as needed with new analysis and easily distribute to demonstrate the capabilities of the research we do.

Q: With pressure coming from universities to create more corporate partnerships to attract more funding and draw research grants – what is the responsibility of researchers in protecting that data and ensuring that it isn’t misused?

A: Researchers are responsible for following ethical best practices whether in an academic institution or a business context. There exist rigorous codified practices for human subjects research and they are not that hard to adapt to business contexts.
While I may come at this issue from the industry side, I came from a university where thesis projects were often corporate partnerships. In every case, the faculty advisers and the mentors on the corporate side worked with students to make sure that projects met Institutional Review Board (IRB) standards. That may sound daunting, but it was a lot easier than you would think to have a project in an industry context meet academic ethics requirements. Institutions have done a lot of work to write out relatively straightforward ethics around human subjects research, thanks to medicine and psychology.
Social scientists are held to the same standards at the university level as the medical profession, as both count as “human subjects research.” As such, things like informed consent, protection of personally identifiable information (PII), and protection of minors’ rights are pretty standard for any research project. It really is not that difficult to take those principles and apply them to an industry context. It is the responsibility of researchers, as trained professionals in our field, to uphold those ethical standards at every point of the process as a matter of course. If we want to change the paradigm of fast-and-loose market research, we have to be the first to commit to best practices.

Q: Where should the line be drawn between protected research data and commercial exploitation?

A: All data is protected as a matter of course; there should be no discussion as to whether you protect data or not.
As stated in the previous response, as far as I am concerned the line is drawn where you protect the data of the people you research. This means anonymizing PII, securing data properly, and following the same ethical principles that any peer-reviewed project is held to. The proprietary loophole gets used too often in this sense: just because your information and outputs are proprietary does not mean they cannot be ethically acquired and utilized. Hiding bad or shady research practices behind a wall of “we can’t have our competition knowing our secrets” only hurts the trust that clients and the general public have in data.
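As a minimal sketch of what anonymizing PII can look like in practice, one common approach is to replace direct identifiers with salted, irreversible hashes before analysis, so records stay joinable without being personally identifiable. The field names and record shape here are hypothetical, not FanAI’s actual pipeline.

```python
import hashlib
import secrets

SALT = secrets.token_hex(16)  # kept secret and stored apart from the data

def pseudonymize(record, pii_fields=("name", "email")):
    """Return a copy of the record with PII fields replaced by salted
    SHA-256 tokens: linkable across datasets, but not identifiable."""
    clean = dict(record)
    for field in pii_fields:
        if field in clean:
            digest = hashlib.sha256((SALT + str(clean[field])).encode()).hexdigest()
            clean[field] = digest[:16]  # a short token is enough for joins
    return clean

row = {"name": "Ada Lovelace", "email": "ada@example.com", "spend_usd": 42.50}
safe = pseudonymize(row)
```

Note that the non-identifying analytics fields (like spend) pass through untouched; only the direct identifiers are transformed.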
Obscuring research practices and exploiting data essentially gives a company the right to say whatever it wants; if the origin of the data and the methodology behind it are considered unimportant or trivial in relation to the findings, you may as well be making it up as you go along. This is usually what is being referred to when we talk about assumption-based or productivity-based models of research. More or less, if you hold up a screen of proprietary secrecy in front of your data practices, you can manipulate your sampling, your data, and your findings to fit whatever narrative is convenient.
This dovetails quite well with the last question, so without further ado.

Q: Is there a need for more verified data and statistics on esports growth, revenue projections, audience size, and sponsorship metrics? Is data from SuperData, Newzoo, and Nielsen Esports reliable?

A: There is a need for better research industry-wide. We see a lot of corporate black-boxing and hype statistics, but with no one there to challenge any of it, that data gets accepted as fact. The more people researching something, the easier it is to create a peer-review process.
In my opinion there is a need for better data and research practice on the industry side in general. We end up with a lot of hype statistics around esports: there is evidence of selective and biased sampling, and the numbers seem inflated. That is one of the reasons we dedupe our audience data, to get a better idea of the actual size of the audience.
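To illustrate why deduplication matters for audience-size claims, here is a minimal sketch: the same fan often appears under several platform records, so naive row counts overstate the audience. The records, field names, and normalization rule are hypothetical, not FanAI’s actual methodology.

```python
def dedupe_audience(records):
    """Count each fan once by normalizing an identifier (here, email),
    keeping the first record seen per fan."""
    seen = set()
    unique = []
    for rec in records:
        key = rec["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Three raw rows, but only two distinct fans once emails are normalized.
raw = [
    {"email": "Fan@Example.com", "platform": "twitch"},
    {"email": "fan@example.com ", "platform": "twitter"},
    {"email": "other@example.com", "platform": "youtube"},
]
fans = dedupe_audience(raw)
```

Real identity resolution is far messier (multiple identifier types, fuzzy matching), but even this toy case shows how a raw count of three becomes an audience of two.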
When it comes to best practices and better measurement, we have to be critical of both the methodologies used in measurement and the means of measuring. There is a place for all kinds of methods of investigation in the esports sector; in fact, we need a variety of methods and backgrounds to even get close to understanding how the esports space works. We also need academia to step in to provide a different perspective and set of goals than those held by people on the professional side of things. As with any scientific endeavor, the more people we have investigating something and the more discussion that goes on about esports, the more we can all hold one another accountable to best practices.
The details of research practice get hammered out in forums like this over time. By creating an open dialogue between professionals, academics, stakeholders, and the esports community itself, we can begin to grasp what best practices might look like in the business of esports research. One thing is clear, at least in my opinion: the current state of things is muddy. Data isn’t often peer reviewed and numbers are often inflated, but when there are only a handful of organizations doing any sort of analysis, and none of them are talking about what they do, we lose out on an opportunity to refine methodologies and establish best practices.