Digital Learning Mission Statement

As a developing leader in the digital education space, I must understand and articulate the guiding principles and values that shape my priorities and research in the present and will guide my work in the future.  These same values should, of course, be reflected in my own digital footprint inasmuch as they inform my approach to leadership.

As a digital citizen advocate (ISTE Standard 7) and admissions official in higher education, I hope to elevate and address issues of access and equity, champion authenticity and integrity in digital spaces, and empower students, faculty, and staff to hold themselves accountable for their actions, roles, and responsibilities in digital spaces.

Access:   

Issues pertaining to access and equity in the higher education admissions world have long persisted, but they’ve been especially well-documented in recent years as controversy after controversy has made headlines.   

In 2019, a highly publicized admissions scandal known as Operation Varsity Blues revealed a conspiracy in which more than 30 affluent parents, many in the entertainment industry, offered bribes to influence undergraduate admissions decisions at elite California universities.  The scandal was not, however, limited to the misguided actions of wealthy, overzealous parents; it also included investigations into the coaches and higher education admissions officials who were complicit (Greenspan, 2019).  This scandal and others like it highlight that admissions decisions are not always objective and merit-based, and that those with the resources to game the system often do.

Unfortunately, there are many ways in which bias and inequity infiltrate the admissions process and undermine access to a high-quality college education.  Standardized tests like the SAT or ACT for undergraduate admissions and the GRE or GMAT for graduate admissions come with their fair share of concerns.  Research has repeatedly shown how racial bias affects test design, assessment, and student performance, bringing biased data into the admissions process from the start (Choi, 2020).

That said, admissions portfolios without standardized test scores have one less “objective” data point to consider, putting more weight on the more subjective pieces of an application (essays, recommendations, interviews, etc.). Most university admissions processes in the U.S.—both undergraduate and graduate—are human-centered and involve a “holistic review” of application materials (Alvero et al., 2020).  A study by Alvero et al. (2020) exploring bias in admissions essay reviews found that reviewers inferred applicant demographic characteristics (namely gender and household income) with a high degree of accuracy, opening the door for biased conclusions drawn from the essay within a holistic review system.
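To see how easily demographic signal can leak from essay text, consider a deliberately crude sketch. The training “essays,” labels, and scoring method below are all invented for illustration and bear no relation to Alvero et al.’s actual models; the point is only that even naive word counting picks up socioeconomic cues:

```python
from collections import Counter

# Hypothetical labeled training essays (label = self-reported group).
train = [
    ("my summer internship at my father's firm taught me leadership", "higher_income"),
    ("our family business and travel abroad shaped my ambitions", "higher_income"),
    ("working night shifts to help pay rent taught me persistence", "lower_income"),
    ("translating for my parents at the clinic shaped my goals", "lower_income"),
]

def word_counts(label):
    # Aggregate word frequencies for one group's essays.
    c = Counter()
    for text, y in train:
        if y == label:
            c.update(text.split())
    return c

COUNTS = {y: word_counts(y) for y in ("higher_income", "lower_income")}

def infer_group(essay):
    # Score each group by how often it has seen the essay's words.
    def score(label):
        return sum(COUNTS[label][w] for w in essay.split())
    return max(COUNTS, key=score)

print(infer_group("an internship abroad with the family business"))
```

If a toy model this simple can separate the two groups of sentences, it is unsurprising that trained human readers (or real classifiers) infer demographics from full essays with high accuracy.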

It should go without saying that higher education institutions (HEIs) must seek to implement equitable, bias-free admissions processes that guarantee access to all qualified students and prioritize diverse student bodies.  Though technology may not—and likely should not, on its own—offer a comprehensive solution to admissions bias, there are certainly digital tools that, when used thoughtfully by higher education admissions professionals, can assist in the quest for equitable admissions practices, addressing challenges and improving higher education communities for students, faculty, and staff alike (ISTE standard 7a).

Without the wide recruiting net and public funding that large state institutions enjoy, the search for equitable recruiting and admissions practices and diverse classes may be hardest for small universities (Mintz, 2020). Taylor University—a small, private liberal arts university in Indiana—has turned to AI and algorithms for assistance in many aspects of the admissions and recruiting process.  Platforms such as the Education Cloud by Salesforce “…use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes” (Koenig, 2020).

Understandably, admissions officials want to admit students who have the highest likelihood of “succeeding” (i.e. persisting through to graduation).  Noting that predictive tools must also account for bias that may exist in raw data reporting (like “name coding” or zip code bias), companies with products similar to the Education Cloud market fairer, more objective, more scientific ways to predict student success (Koenig, 2020).  As a result, HEIs like Taylor are confidently using these kinds of tools in the admissions process to help counteract biases that grow situationally and often unexpectedly from how admissions officers review applicants, including an inconsistent number of reviewers, reviewer exhaustion, personality preferences, etc. (Pangburn, 2019).   

Additionally, AI assists with more consistent and comprehensive “background” checks for student data reported on an application (e.g., confirming whether or not a student was really an athlete) (Pangburn, 2019). Findings from the Alvero et al. (2020) study suggest that AI use and data auditing might be useful in informing the review process by checking for potential bias in human or computational readings.

Regardless of their specific function in the process, AI and algorithms have the potential to make the admissions system more equitable by identifying authentic data points and helping schools reduce unseen human biases that can impact admissions decisions, while simultaneously making bias pitfalls more explicit.

Without denying the ways in which technology has offered significant assistance to—and perhaps progress in—the world of higher education admissions, it’s still wise to think critically about the function of AI and algorithms in order to ensure they’re helping more than they’re hurting.  There is a persistent and reasonable concern among digital ethicists that AI and algorithms simply mask and extend preexisting prejudice (Koenig, 2020).  It is dangerous to assume that technology is inherently objective or neutral, since technology is still created and designed by humans with implicit (or explicit) biases.  As author Ruha Benjamin states in Race After Technology: Abolitionist Tools for the New Jim Code (2019), “…coded inequity makes it easier and faster to produce racist outcomes” (p. 12). Thus, there is always a balance to be struck where technology is concerned.

As an admissions official at a small, private, liberal arts institution, I am well aware of the challenges presented to HEI recruitment and admissions processes in the present and future, and I am heartened to consider the possibilities that AI and algorithms might bring to the table, especially regarding equitable admissions practices and recruiting more diverse student bodies.  However, echoing the sentiments of Race After Technology, I do not believe that technology is inherently neutral, and I do not believe that AI or algorithms are a comprehensive solution for admissions bias.  Higher education officials must proceed carefully, thoughtfully, and with the appropriate amount of skepticism in order to allow tech tools to reach their fullest potential in helping address issues of access, equity, and bias in higher education admissions (ISTE standard 7a).

Authenticity:   

Anyone with a digital footprint has participated in creating—knowingly or unknowingly—a digital identity for themselves.  Virtual space offers people the freedom “…to choose who they want to be, how they want to be, and whom they want to impress, without being constrained by the norms and behaviors that are desirable in the society to which they belong” (Camacho et al., 2012, p. 3177).  The internet provides new spaces for learning, working, and socializing, and all of these spaces offer opportunities for identity to be renegotiated (Camacho et al., 2012). It is a worthy—if not simple—endeavor to pursue authenticity and integrity within any kind of digital identity, digital media representation, or online interaction.

Interactive communication technologies (ICTs) complicate meaningful pursuits of authenticity. These digitally-mediated realms of human interaction challenge what we see as authentic and make it harder to tell the difference between what is “real” and what is “fake” (Molleda, 2010).  One need look no further than “reality” TV, social media personas, and journalistic integrity in the era of “fake news” to understand that not all claims of authenticity in media are substantive.   

This does not mean, however, that authenticity in digital space fails to have inherent value or is impossible to achieve.  An ethic of authenticity goes beyond any kind of plan, presentation, or strategic marketing campaign; authenticity is about presenting the essence of what already exists and whether (or not) it has the ability to live up to its own and others’ expectations and needs (Molleda, 2010).  Exercising authenticity includes making informed decisions about protecting personal data while still curating the digital profile one intends to reflect (ISTE Standard 7d).  Exercising authenticity also contributes to a wise use of digital platforms and healthy, meaningful online interactions (ISTE standard 7b).  

In a comprehensive literature review, Molleda (2010) found that several pervasive themes, definitions, and perceptions of authenticity consistently surfaced across a variety of disciplines.  Taken as a whole, Molleda (2010) asserts that these claims may be used to “index” or measure authenticity to the extent that they are present in any given communication or media representation.  Some of these key aspects of authenticity include:  

  1. Being “true to self” and stated core values  
  2. Maintaining the essence of the original (form, idea, design, product, service, etc.)  
  3. Living up to others’ expectations and needs (e.g. delivering on promises)  
  4. Being original and thoughtfully created vs. mass produced  
  5. Existing beyond profit-making or corporate/organizational gains  
  6. Deriving from true human experience  
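As a thought experiment, Molleda’s notion of “indexing” authenticity could be operationalized as a simple checklist score. The dimension names and boolean ratings below are my own shorthand for the six aspects listed above, not an instrument from the paper, and the ratings themselves would have to come from a human coder:

```python
# Six dimensions paraphrased from Molleda's (2010) list above.
AUTHENTICITY_DIMENSIONS = [
    "true_to_self",
    "maintains_original_essence",
    "lives_up_to_expectations",
    "original_not_mass_produced",
    "beyond_profit_motive",
    "from_true_human_experience",
]

def authenticity_index(ratings: dict) -> float:
    """Fraction of the six dimensions judged present (0.0 to 1.0)."""
    present = sum(bool(ratings.get(d, False)) for d in AUTHENTICITY_DIMENSIONS)
    return present / len(AUTHENTICITY_DIMENSIONS)

# A hypothetical communication judged to satisfy three of six dimensions.
sample = {"true_to_self": True, "lives_up_to_expectations": True,
          "from_true_human_experience": True}
print(round(authenticity_index(sample), 2))
```

The value of such a score lies less in the number itself than in forcing an explicit judgment on each dimension, which is essentially what Molleda suggests indexing would accomplish.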

Molleda (2010) concludes that consistency between “the genuine nature of organizational offerings and their communication is crucial to overcome the eroding confidence in major social institutions” (p. 233). I for one hope to continue embodying an ethic of authenticity in both my personal and professional work—in digital spaces and otherwise—in order to set the stage for that consistency and to bolster societal confidence in the institution I’m a part of.  

Accountability:

The digital realm is an extended space where human interactions take place, and the volume of interactions taking place in digital spaces is only increasing.  As with any segment of human society, human flourishing, creativity, and innovation take place in spaces where people feel safe and invested in their community. Thus, as a digital leader, it is important to empower others to seriously consider their individual roles and responsibilities in the digital world, inspiring them to use technology for civic engagement and improving their communities, virtually or otherwise (ISTE standard 7a).

Humans do not come ready-made with all of the savvy needed to engage with media and online communications in wise ways.  Media literacy education is needed to provide the cognitive and social scaffolding that leads to substantive, responsible civic engagement (Martens & Hobbs, 2015).  Media literacy is also a subset of one’s own ethical literacy, which is the ability to articulate and reflect upon one’s own moral life in order to encourage ethical, reasoned actions.  As a digital citizen advocate and marketing professional, I hope to support educators in all kinds of contexts—both personal and professional—to examine the sources of online media, be mindful consumers of online content, and consistently identify underlying assumptions in the content we interact with (ISTE standard 7c).  In an effort to be digitally wise and to “beat the algorithm” in digital spaces (both literally and metaphorically speaking), we must self-identify potential echo chambers and intentionally seek out alternative perspectives.  This requires a commitment to media literacy education in all kinds of formal and informal environments.    

Media literacy education equips educational leaders and students to foster a culture of respectful, responsible online interactions and a healthy, life-giving use of technology (ISTE standard 7b).  According to Hobbs (2010), media literacy is a subset of digital literacy skills involving:  

  1. the ability to access information by locating and sharing materials and comprehending information and ideas;
  2. the ability to create content in a variety of forms, making use of digital tools and technologies;
  3. the ability to reflect on one’s own conduct and communication by applying social responsibility and ethical principles (ISTE standard 7b);
  4. the ability to take social action by working individually and collaboratively to share knowledge and solve problems as a member of a community (ISTE standard 7a); and
  5. the ability to analyze messages in a variety of forms by identifying the author, purpose, and point of view and evaluating the quality and credibility of the content (ISTE standard 7c).

In a study conducted with 400 American high school students, findings showed that students who participated in a media literacy program had substantially higher levels of media knowledge and news/advertising analysis skills than other students (Martens & Hobbs, 2015).  Perhaps more importantly, information-seeking motives, media knowledge, and news analysis skills independently contributed to adolescents’ proclivity towards civic engagement (Martens & Hobbs, 2015), and civic engagement naturally requires dialogue with others within and outside of an individual’s immediate circle.  In other words, the more students were able to critically consider the content they were consuming and the motives behind why they were consuming it, the more they wanted to engage with alternative perspectives and be active, responsible, productive members of a larger community (ISTE standard 7a).  

This particular guiding principle is large in scope; its importance and relevance aren’t limited to a specific aspect of my professional context so much as they help define an ethos for all actions, communication, and consumption that take place in the digital world.  In order to hold ourselves accountable for our identities and actions online, we must exercise agency.  The passive internet user/consumer is the one most likely to get caught in an echo chamber, develop destructive online habits, and communicate poorly in virtual space. The digitally wise will make consistent efforts to challenge their own thinking, create safe spaces for communication, intentionally seek out alternative voices, and actively reflect on their contributions to an online community, ultimately making digital spaces a little bit better than they found them.

References:  

Alvero, A.J., Arthurs, N., Antonio, A., Domingue, B., Gebre-Medhin, B., Gieble, S., & Stevens, M. (2020). AI and holistic review: Informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 200–206). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375871

Benjamin, R. (2019).  Race after technology: Abolitionist tools for the New Jim Code. Polity.  

Camacho, M., Minelli, J., & Grosseck, G. (2012). Self and identity: Raising undergraduate students’ awareness on their digital footprints. Procedia Social & Behavioral Sciences, 46, 3176–3181.

Choi, Y.W. (2020, March 31). How to address racial bias in standardized testing. Next Gen Learning. https://www.nextgenlearning.org/articles/racial-bias-standardized-testing  

Greenspan, R. (2019, May 15). Lori Loughlin and Felicity Huffman’s college admissions scandal remains ongoing. Here are the latest developments. Time. https://time.com/5549921/college-admissions-bribery-scandal/  

Hobbs, R. (2010). Digital and media literacy: A plan of action. The Aspen Institute Communications and Society Program & the John S. and James L. Knight Foundation. https://knightfoundation.org/reports/digital-and-media-literacy-plan-action/  

Koenig, R. (2020, July 10). As colleges move away from the SAT, will algorithms step in? EdSurge. https://www.edsurge.com/news/2020-07-10-as-colleges-move-away-from-the-sat-will-admissions-algorithms-step-in  

Martens, H. & Hobbs, R. (2015). How media literacy supports civic engagement in a digital age.  Atlantic Journal of Communication, 23, 120–137.  

Mintz, S. (2020, July 13). Equity in college admissions. Inside Higher Ed. https://www.insidehighered.com/blogs/higher-ed-gamma/equity-college-admissions-0  

Molleda, J. (2010). Authenticity and the construct’s dimensions in public relations and communication research. Journal of Communication Management, 14(3), 223–236. http://dx.doi.org.ezproxy.spu.edu/10.1108/13632541011064508

Pangburn, D. (2019, May 17). Schools are using software to help pick who gets in. What could go wrong? Fast Company. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong  

Bias in Higher Ed Admissions: Is New Tech Helping or Hurting?

Higher education admissions practices have made headlines in recent years, with issues of access and equity at the heart of the controversies. In 2019, a highly publicized admissions scandal known as Operation Varsity Blues revealed a conspiracy in which more than 30 affluent parents, many in the entertainment industry, offered bribes to influence undergraduate admissions decisions at elite California universities.  The scandal was not limited to the misguided actions of wealthy, overzealous parents, however; it also included investigations into the coaches and higher education admissions officials who were complicit (Greenspan, 2019).

Harvard University has also seen its fair share of scandals, including a bribery scheme of its own and controversy over racial bias in the admissions process.  In 2019, the advocacy group Students for Fair Admissions took Harvard to court over several core claims:

  1. That Harvard had intentionally discriminated against Asian-Americans
  2. That Harvard had used race as a predominant factor in admissions decisions
  3. That Harvard had used racial balancing and considered the race of applicants without first exhausting race-neutral alternatives.
[Image: Demonstrators hold signs in front of a courthouse in Boston, Massachusetts, in October 2018. Xinhua/Barcroft Images]

In line with the tenets of affirmative action, the court eventually ruled that Harvard could continue considering race in its admissions process in pursuit of a diverse class, and that race had never (illegally) been used to “punish” an Asian-American student in the review process (Hassan, 2019).  Yet regardless of the ruling, Harvard was forced to look long and hard at its admissions processes and to meaningfully consider where implicit bias might be negatively affecting admissions decisions.

Another area of bias identified in the college admissions system nationwide is the use of standardized tests, especially the SAT or ACT for undergraduate admissions and the GRE or GMAT for graduate admissions.  The move away from these tests has only accelerated during the pandemic, with many colleges and universities making SAT/ACT or GRE/GMAT scores optional for admission in 2020-2021 (Koenig, 2020).  Research has repeatedly shown how racial bias affects test design, assessment, and performance on these standardized exams, bringing biased data into the admissions process from the start (Choi, 2020).

That said, admissions portfolios without standardized test scores have one less “objective” data point to consider, putting more weight on the more subjective pieces of an application (essays, recommendations, interviews, etc.). Most university admissions processes in the U.S.—both undergraduate and graduate—are human-centered and involve a “holistic review” of application materials (Alvero et al., 2020).  A study by Alvero et al. (2020) exploring bias in admissions essay reviews found that reviewers inferred applicant demographic characteristics (namely gender and household income) with a high degree of accuracy, opening the door for biased conclusions drawn from the essay within a holistic review system.

So the question remains—how do higher education institutions (HEIs) implement equitable, bias-free admissions processes that guarantee access to all qualified students and prioritize diverse student bodies?  To assist in this worthwhile quest for equity, many HEIs are turning to algorithms and AI to see what they have to offer.

Lending a Helping Hand

Without the wide recruiting net and public funding that large state institutions enjoy, the search for equitable recruiting and admissions practices and diverse classes may be hardest for small universities (Mintz, 2020). Taylor University—a small, private liberal arts university in Indiana—has turned to the Salesforce Education Cloud (and the AI and algorithmic tools within) for assistance in many aspects of the admissions and recruiting process.  The Education Cloud and other similar platforms “…use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes” (Koenig, 2020).

For a smaller university with limited resources, the Education Cloud helps Taylor’s admissions officers zero in on the types of applicants they feel are most likely to enroll, and then identify target populations in other areas of the country that exhibit similar data sets.  Taylor can then strategically and economically focus recruiting efforts where they’re—statistically speaking—likely to get the most interest.  Having boasted its largest freshman class ever in fall 2015, Taylor is, in many ways, a success story, and it now uses Education Cloud data services to predict student success outcomes and make decisions about distributing financial aid and scholarships (Pangburn, 2019).

Understandably, admissions officials want to admit students who have the highest likelihood of “succeeding” (i.e., persisting through to graduation).  Noting that the Salesforce AI predictive tools somehow account for bias that may exist in raw data reporting (like “name coding” or zip code bias), companies with products similar to the Education Cloud market fairer, more objective, more scientific ways to predict student success (Koenig, 2020).  As a result, HEIs like Taylor are confidently using these kinds of tools in the admissions process to help counteract biases that grow “situationally” and often unexpectedly from how admissions officers review applicants, including an inconsistent number of reviewers, reviewer exhaustion, personality preferences, etc. (Pangburn, 2019).  Additionally, AI assists with more consistent and comprehensive “background” checks for student data reported on an application (e.g., confirming whether or not a student was really an athlete) (Pangburn, 2019). Findings from the Alvero et al. (2020) study mentioned earlier suggest that AI use and data auditing might be useful in informing the review process by checking for potential bias in human or computational readings.

Another interesting proposal for the use of tech in the admissions process is the gamification of data points.  Companies like KnackApp are marketing recruitment tools that would have applicants play a game for 10 minutes.  Behind the scenes, algorithms allegedly gather information about users’ “microbehaviors,” such as the types of mistakes they make, whether those mistakes are repeated, the extent to which the player takes experimental paths, how the player is processing information, and the player’s overall potential for learning (Koenig, 2020). The CEO of KnackApp, Guy Halftek, claims that colleges outside the U.S. already use KnackApp in student advising, and the hope is that U.S. colleges will begin using the platform in the admissions process to create gamified assessments that would provide additional data points and measurements for desirable traits that might not otherwise be found in standardized test scores, GPA, or an entrance essay (Koenig, 2020).

[Image: Sample screenshot of a KnackApp game, apkpure.com]

Regardless of their specific function in the overall process, AI and algorithms are being pitched as a way to make the admissions system more equitable by identifying authentic data points and helping schools reduce unseen human biases that can impact admissions decisions, while simultaneously making bias pitfalls more explicit.

What’s The Catch?

Without denying the ways in which technology has offered significant assistance to—and perhaps progress in—the world of HEI admissions, it’s wise to think critically about the function of AI and algorithms and whether or not they are in fact assisting in a quest for equity.

To begin with, there is a persistent concern among digital ethicists that AI and algorithms simply mask and extend preexisting prejudice (Koenig, 2020).  It is dangerous to assume that technology is inherently objective or neutral, since technology is still created and designed by humans with implicit (or explicit) biases (Benjamin, 2019).  As Ruha Benjamin states in the 2019 publication Race After Technology: Abolitionist Tools for the New Jim Code, “…coded inequity makes it easier and faster to produce racist outcomes” (p. 12).

Some areas of concern with using AI and algorithms in college admissions include:

  1. Large software companies like Salesforce avoid admitting that bias could ever be an underlying issue, and instead market themselves as having “solved” it (Pangburn, 2019).
  2. Predictive concerns: if future decisions are made on past data, a feedback loop of replicated bias might ensue (Pangburn, 2019).
  3. If, based on data, universities strategically market only to desirable candidates, they’ll likely pay more visits and make more marketing efforts to students in affluent areas and those who are likely to yield more tuition revenue (Pangburn, 2019).
  4. When it comes to “data-based” decision-making, it’s easier to get data for white, upper-middle-class suburban kids, and models (for recruiting goals, student success, and graduation outcomes) end up being built on easier data (Koenig, 2020).
  5. Opportunities for profit maximization are often rebranded as bias minimization, regardless of the extent to which that is accurate (Benjamin, 2019).
  6. Data privacy… (Koenig, 2020)
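The second concern—a feedback loop of replicated bias—can be made concrete with a toy simulation. Everything below is synthetic and deliberately simplistic (two zip-code groups with identical ability, a “model” that just learns each group’s historical admit rate); it is an illustration of the feedback-loop mechanism, not a depiction of any vendor’s actual system:

```python
import random

random.seed(0)

def make_cohort(n=1000):
    # Applicants from two zip-code groups with identical ability distributions.
    return [{"group": random.choice("AB"), "ability": random.random()} for _ in range(n)]

def biased_history(cohort):
    # Historical human decisions: group A faces a higher ability bar than group B.
    return [(a, a["ability"] > (0.7 if a["group"] == "A" else 0.5)) for a in cohort]

def fit(history):
    # "Model": learn the observed admit rate per group (zip-code proxy baked in).
    rates = {}
    for g in "AB":
        outcomes = [admit for a, admit in history if a["group"] == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def predict_admits(model, cohort):
    # Admit the top fraction of each group implied by its historical rate,
    # so the gap in the data becomes a gap in the new decisions.
    decisions = []
    for g in "AB":
        members = sorted((a for a in cohort if a["group"] == g),
                         key=lambda a: a["ability"], reverse=True)
        k = int(model[g] * len(members))
        decisions.extend((a, True) for a in members[:k])
        decisions.extend((a, False) for a in members[k:])
    return decisions

history = biased_history(make_cohort())
for generation in range(3):
    model = fit(history)
    print(f"gen {generation}: admit rate A={model['A']:.2f}, B={model['B']:.2f}")
    history = predict_admits(model, make_cohort())  # next year's "ground truth"
```

Even though the two groups are equally able by construction, the gap introduced by past human decisions survives every generation of “data-driven” decisions, because each year’s model is fit to the previous year’s biased outcomes.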

Finally, there’s always the question of human abilities and “soft skills,” and to what extent those should be modified or replaced by AI in any professional field.  There’s no denying the limitations AI and algorithms face in making appropriate contextual considerations.  For example, how does AI account for a high school or for-profit college that historically participates in grade inflation?  How does AI account for the additional challenges faced by a lower-income or first-generation student? (Pangburn, 2019)  There are also no guarantees that applicants won’t figure out how to “game” data-based admissions systems by strategically optimizing their own data. If and when that happens, you can bet that the most educated, wealthiest, highest-resourced students and families will be the ones optimizing that data, replicating the system of bias and inequity that already exists (Pangburn, 2019).

As an admissions official at a small, liberal arts institution, I am well aware of the challenges presented to recruitment and admissions processes in the present and future, and I am heartened to consider the possibilities that AI and algorithms might bring to the table, especially regarding efforts toward equitable admissions practices and recruiting more diverse student bodies.  However, echoing the sentiments of Ruha Benjamin in Race After Technology, I do not believe that technology is inherently neutral, and I do not believe that AI or algorithms are a comprehensive solution for admissions bias.  Higher education officials must proceed carefully, thoughtfully, and with the appropriate amount of skepticism.

References:

Alvero, A.J., Arthurs, N., Antonio, A., Domingue, B., Gebre-Medhin, B., Gieble, S., & Stevens, M. (2020). AI and holistic review: Informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 200–206). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375871

Benjamin, R. (2019).  Race after technology: Abolitionist tools for the New Jim Code. Polity.

Choi, Y.W. (2020, March 31). How to address racial bias in standardized testing. Next Gen Learning. https://www.nextgenlearning.org/articles/racial-bias-standardized-testing

Greenspan, R. (2019, May 15). Lori Loughlin and Felicity Huffman’s college admissions scandal remains ongoing. Here are the latest developments. Time. https://time.com/5549921/college-admissions-bribery-scandal/

Hassan, A. (2019, November 5). 5 takeaways from the Harvard admissions ruling. The New York Times. https://www.nytimes.com/2019/10/02/us/takeaways-harvard-ruling-admissions.html

Koenig, R. (2020, July 10). As colleges move away from the SAT, will algorithms step in? EdSurge. https://www.edsurge.com/news/2020-07-10-as-colleges-move-away-from-the-sat-will-admissions-algorithms-step-in

Mintz, S. (2020, July 13). Equity in college admissions. Inside Higher Ed. https://www.insidehighered.com/blogs/higher-ed-gamma/equity-college-admissions-0

Pangburn, D. (2019, May 17). Schools are using software to help pick who gets in. What could go wrong? Fast Company. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong