Online Teaching & Learning in Higher Education During COVID-19 & Beyond: Pitfalls & Opportunities for Access & Equity

Like so many other countries across the globe, South Africa saw its higher education institutions forced to reckon with a rapid pivot to online teaching/learning in order to maintain operations during the COVID-19 pandemic. Of the 26 public universities in South Africa, 25 are residential institutions that did not offer distance education prior to 2014 (Czerniewicz et al., 2020). The scramble to change modalities in 2020 came on the heels of nationwide student protests from 2015-2017, during which typical campus activities and courses were repeatedly interrupted at some of the largest public institutions because of the #FeesMustFall movement, a student-led initiative protesting swift, large, and prohibitive hikes in tuition costs. At the heart of the #FeesMustFall movement was attention to the fact that systemic racism and resource inequalities left historically marginalized students least able to cope with the tuition increases. Unsurprisingly, the scope and scale of online teaching/learning suddenly required during the pandemic only further accentuated issues of access and equity for university students in South Africa, particularly with regard to the digital divide among the student population. When students had access to the usual campus infrastructure, they could rely on free Wi-Fi, libraries, and computer labs, which reduced some of the disparity in technology access (Swartz et al., 2018). When this access was taken away, many existing inequalities were made starkly visible, and students without expansive resource networks were left adrift.

“Across the nation, the pandemic revealed historic (and mostly forgotten) fault lines, and as silence settled down upon buzzing cities and communities and we all came to a standstill, we were forced to hear the tectonic layers pushing and shoving against one another, tectonic layers of intergenerational inequalities, unheard and ignored for too long.”

Czerniewicz et al., 2020, para. 14

This quote might just as easily refer to the United States, especially in the weeks and months following the murder of George Floyd in May of 2020 and the ongoing discourse about racial tensions and inequalities embedded in American systems. Indeed, the relative ‘silence’ ushered in by the COVID-19 pandemic revealed the consistent hum of existing inequalities embedded within communities, throughout countries, and across international borders—inequalities that significantly impacted the ability of students at all levels to continue learning (or not).

[Figure: Internet users in 2012 as a percentage of a country’s population. Source: International Telecommunication Union.]

In light of the socio-political factors influencing teaching/learning during COVID-19, Czerniewicz et al. (2020) set out to analyze how issues of equity and inequality played out in the pivot to online teaching/learning in South African higher education during the pandemic, and how these concerns might have implications or offer guidance for the educational enterprise post-pandemic. In this case study, nine themes on access & equity emerged which, almost certainly, will find echoes among educators and school administrators worldwide. Some themes serve as cautions and highlight system failures, while others highlight the possibilities and opportunities afforded through online teaching/learning:

  1. Inequalities Made Visible: the crisis made preexisting inequalities and infrastructure failures starkly visible; this included poorly-constructed pedagogies that had already failed to meet the varied and nuanced needs of real university students (as opposed to the disembodied ‘ideal’ student), crisis notwithstanding.
  2. Embedded in Context: the sudden shift to online teaching/learning took place within embedded contexts where gender, culture, race, geopolitical context, etc., played a part in a student’s lived experience; all influencing factors must be considered intersectionally in the learning environment, online or otherwise.
  3. Multimodal Strategies: it became clear that in order to even come close to meeting student needs for remote learning, a ‘multimodal’ or ‘hyflex’ approach was required; this meant course content had to be highly accessible through multiple media formats, some of which were not digital.
  4. Making a Plan: pre-existing emergency plans for instruction at both the institutional and instructional level are a necessity and must include provisions for unreliable electrical power or internet access.
  5. Digital Literacy: student levels of digital literacy and capacity for effective navigation of e-learning tools cannot be assumed; neither can assumptions be made about the faculty/staff responsible for implementing digital learning tools.
  6. Places of Learning: in lockdown, many students, faculty, and staff no longer had a dedicated space in which to engage in their scholastic duties. Students, faculty, and staff were unevenly impacted and had to make substantially different sacrifices depending on their circumstances (e.g. parents with young children at home, students caring for elderly relatives, etc.).
  7. Parity of Pedagogy: the crisis forced learning design to become more student-centered than ever before. Though there were certainly gaps and failings, instructors were re-thinking assessment strategies and intervention options in comprehensive ways.
  8. Sectoral Stratification: similar to the first theme, the pandemic highlighted existing inequalities, this time at the institutional level. Larger/smaller, urban/rural, ranked/not ranked universities all faced different kinds of obstacles, and historically advantaged institutions fared better in their emergency responses.
  9. Social Responsibility in Higher Education: the boundaries between higher education and larger society are porous, and universities cannot pretend they are neutral when it comes to social and economic inequities.

Perhaps central to each of these themes is the need for student-centered learning design and careful consideration of the extent to which stakeholders have access to the internet and a suitable device. The pandemic has shown the urgent need to teach and support students no matter where they live or what resources they personally possess (Correia, 2020). In support of the third theme listed above (multimodal pedagogical strategies), Correia (2020) offers an array of concrete tools and strategies for low-bandwidth online teaching/learning that can help mitigate the impacts of the digital divide in digital education environments:

  1. Start designing a course with three assumptions in mind: 1) the student may have limited bandwidth, data, or internet access with which to participate in the course; 2) the student may be much less familiar with the technology being used than the instructor; and 3) the student may not have access to tech equipment like cameras, printers, and scanners.
  2. Make frequent contact and learn about student accessibility needs. Consider the use of postal mail (with postage costs covered), landline phone calls, chat check-ins, and asynchronous video messages.
  3. Consider how to incorporate a student’s informal learning and life experiences into course assignments and objectives; in other words, lean into student learning that occurs offline.
  4. Use free resources and tools profusely. OER Commons is just one example of a public digital library of open educational resources. Bear in mind, however, that where assignments are concerned, limited internet access may bar frequent usage, even if the tool is free.
  5. Utilize pre-recorded lectures and transcripts for students unable to join synchronous video conferences.
  6. Use audio recordings as educational resources (e.g. podcasts), as well as for instructor-student communication. Audio recordings often result in fewer tech issues and use less bandwidth; they mitigate the need for a camera along with possible feelings of intrusion or shyness that cameras can bring.
  7. Use alternative forms of assessment, which may include portfolios, open book examinations, or discussion forums.

Of course, COVID-19 did not usher in the dawn of online education. The demand for digital education in its various forms has been growing steadily over the last decade, even prior to the pandemic (Xie et al., 2020). Its increase in popularity can largely be credited to the possibilities it provides for access and equity, including opportunities for flexibility, efficiency, the promotion of innovative and student-centered teaching strategies, access to varied (and often free) sources of information, access to global research and collaboration, and increased access to (and reduced costs for) higher education, especially for students who couldn’t otherwise afford to attend a residential university (Xie et al., 2020). The comprehensive demands of remote teaching/learning during the pandemic have merely accelerated the adoption and acceptance of online teaching/learning in all kinds of educational settings, and it’s fair to assume that a certain level of online teaching/learning integration will define the “new normal” in education moving forward (Xie et al., 2020). Educators and educational institutions, then, must be able to recognize the potential pitfalls for access and equity as they pertain to digital education. To the extent that online teaching/learning is here to stay, educators can’t afford to ignore student needs and the ways online teaching/learning might be insufficient to meet them. And yet, there remains significant potential.

“[The pandemic] has brought into focus numerous examples of extraordinary resilience, networks and…unexpected alliances of collaboration and support, including inspiring creativity, examples of technology used for equity purposes and moments of optimism. …There is an opportunity in the moment for genuine equity-focused innovation, policymaking, provision and pedagogy.” 

Czerniewicz et al., 2020

References 

Correia, A. (2020). Healing the digital divide during the COVID-19 pandemic. Quarterly Review of Distance Education, 21(1), 13–21. https://ezproxy.spu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip&db=a9h&AN=146721348&site=ehost-live

Czerniewicz, L., Agherdien, N., Badenhorst, J., Belluigi, D., Chambers, T., Chili, M., de Villiers, M., Felix, W., Gachago, D., Gokhale, C., Ivala, E., Kramm, N., Madiba, M., Mistri, G., Mgqwashu, E., Pallitt, N., Prinsloo, P., Solomon, L., … Wissing, G. (2020). A wake-up call: Equity, inequality and Covid-19 emergency remote teaching and learning. Postdigital Science and Education, 2, 946–967. https://doi.org/10.1007/s42438-020-00187-4

Swartz, B.C., Gachago, D., & Belford, C. (2018). To care or not to care – reflections on the ethics of blended learning in times of disruption. South African Journal of Higher Education, 32(6), 49–64.

Xie, X., Siau, K., & Nah, F. (2020). COVID-19 pandemic – online education in the new normal and the next normal. Journal of Information Technology Case and Application Research, 22(2). https://doi.org/10.1080/15228053.2020.1824884

Digital Learning Mission Statement

As a developing leader in the digital education space, I need to understand and articulate the guiding principles and values that will shape my priorities and research in the present and guide my work in the future. These same values should, of course, be reflected in my own digital footprint inasmuch as they inform my approach to leadership.

As a digital citizen advocate (ISTE Standard 7) and admissions official in higher education, I hope to elevate and address issues of access and equity, champion authenticity and integrity in digital spaces, and empower students, faculty, and staff to hold themselves accountable for their actions, roles and responsibilities in digital spaces.   

Access:   

Issues pertaining to access and equity in the higher education admissions world have long persisted, but they’ve been especially well-documented in recent years as controversy after controversy has made headlines.   

In 2019, a highly-publicized admissions scandal known as Operation Varsity Blues revealed conspiracies committed by more than 30 affluent parents, many in the entertainment industry, who offered bribes to influence undergraduate admissions decisions at elite universities. The scandal was not, however, limited to the misguided actions of wealthy, overzealous parents; it included investigations into the coaches and higher education admissions officials who were complicit (Greenspan, 2019). This event—and others like it—highlights the fact that admissions decisions are not always objective and merit-based, and that those with the resources to game the system often do.

Unfortunately, there are many ways in which bias and inequity infiltrate the admissions process and undermine the accessibility of a high-quality college education. Standardized tests like the SAT or ACT for undergraduate admissions and the GRE or GMAT for graduate admissions come with their fair share of concerns. Research has repeatedly revealed how racial bias affects test design, assessment, and student performance, thus bringing biased data into the admissions process to begin with (Choi, 2020).

That said, admissions portfolios without standardized test scores have one less “objective” data point to consider in the admissions process, putting more weight on other, more subjective pieces of an application (essays, recommendations, interviews, etc.). Most university admissions processes in the U.S.—both undergraduate and graduate—are human-centered and involve a “holistic review” of application materials (Alvero et al., 2020). A study by Alvero et al. (2020) exploring bias in admissions essay reviews found that applicant demographic characteristics (namely gender and household income) could be inferred from the essays with a high level of accuracy, opening the door for biased conclusions drawn from the essay within a holistic review system.

It should go without saying that higher education institutions (HEIs) must seek to implement equitable, bias-free admissions processes that guarantee access to all qualified students and prioritize diverse student bodies. Though technology may not—and likely should not, on its own—offer a comprehensive solution to admissions bias, there are certainly digital tools that, when utilized thoughtfully by higher education admissions professionals, can assist in the quest to prioritize equitable admissions practices, addressing challenges and improving higher education communities for students, faculty, and staff alike (ISTE standard 7a).

Without the wide recruiting net and public funding that large state institutions enjoy, the search for equitable recruiting/admissions practices and diverse classes may be hardest for small universities (Mintz, 2020). Taylor University—a small, private liberal arts university in Indiana—has turned to AI and algorithms for assistance in many aspects of the admissions and recruiting process. Platforms such as the Education Cloud by Salesforce “…use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes” (Koenig, 2020).

Understandably, admissions officials want to admit students who have the highest likelihood of “succeeding” (i.e. persisting through to graduation). Companies with products similar to the Education Cloud claim their predictive tools account for bias that may exist in raw data reporting (like “name coding” or zip code bias) and market them as fairer, more objective, more scientific ways to predict student success (Koenig, 2020). As a result, HEIs like Taylor are confidently using these kinds of tools in the admissions process to help counteract biases that grow situationally and often unexpectedly from how admissions officers review applicants, including an inconsistent number of reviewers, reviewer exhaustion, personality preferences, etc. (Pangburn, 2019).

Additionally, AI assists with more consistent and comprehensive “background” checks for student data reported on an application (e.g. confirming whether or not a student was really an athlete) (Pangburn, 2019). Findings from the Alvero et al. (2020) study suggest that AI use and data auditing might be useful in informing the review process by checking potential bias in human or computational readings.
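As a deliberately simplified illustration of what that kind of data auditing could look like in practice, the sketch below computes admit rates by group from a hypothetical decision log and flags large gaps. The column names, group labels, and 0.8 “four-fifths” threshold are assumptions made for the example, not part of any cited study or vendor tool.

```python
# Illustrative only: a simple disparate-impact audit of past admissions
# decisions. Table layout, group labels, and the 0.8 threshold are
# assumptions for this sketch, not any vendor's actual method.
import pandas as pd

def selection_rates(df, group_col="group", outcome_col="admitted"):
    """Admit rate per group, plus its ratio to the highest-rate group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("admit_rate")
    ratios = (rates / rates.max()).rename("ratio_to_highest")
    return pd.concat([rates, ratios], axis=1)

# Hypothetical decision log -- in practice this would be exported from an
# admissions CRM or applicant database.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "admitted": [ 1,   1,   1,   0,   1,   0,   0,   1,   0,   0 ],
})

report = selection_rates(decisions)
print(report)
# A common (if crude) flag: group ratios below ~0.8 warrant a closer look.
print("Groups to review:", list(report[report["ratio_to_highest"] < 0.8].index))
```

An audit like this does not explain why a gap exists, but it makes the gap visible so that human reviewers, not just the model, have to account for it.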

Regardless of their specific function in the process, AI and algorithms have the potential to make the admissions system more equitable by identifying authentic data points and helping schools reduce unseen human biases that can impact admissions decisions, while simultaneously making bias pitfalls more explicit.

Without denying the ways in which technology has offered significant assistance to—and perhaps progress in—the world of higher education admissions, it’s still wise to think critically about the function of AI and algorithms in order to ensure they’re helping more than they’re hurting. There is a persistent and reasonable concern among digital ethicists that AI and algorithms simply mask and extend preexisting prejudice (Koenig, 2020). It is dangerous to assume that technology is inherently objective or neutral, since technology is still created or designed by humans with implicit (or explicit) biases. As author Ruha Benjamin states in Race After Technology: Abolitionist Tools for the New Jim Code (2019), “…coded inequity makes it easier and faster to produce racist outcomes” (p. 12). Thus, there is always a balance to be struck where technology is concerned.

As an admissions official at a small, private, liberal arts institution, I am well aware of the challenges presented to HEI recruitment and admissions processes in the present and future, and I am heartened to consider the possibilities that AI and algorithms might bring to the table, especially regarding equitable admissions practices and recruiting more diverse student bodies. However, echoing the sentiments of Race After Technology, I do not believe that technology is inherently neutral, and I do not believe that AI or algorithms are comprehensive solutions for admissions bias. Higher education officials must proceed carefully, thoughtfully, and with the appropriate amount of skepticism in order to allow tech tools to reach their fullest potential in helping address issues of access, equity, and bias in higher education admissions (ISTE standard 7a).

Authenticity:   

Anyone with a digital footprint has participated in creating—knowingly or unknowingly—a digital identity for themselves. Virtual space offers people the freedom “…to choose who they want to be, how they want to be, and whom they want to impress, without being constrained by the norms and behaviors that are desirable in the society to which they belong” (Camacho et al., 2012, p. 3177). The internet provides new spaces for learning, working, and socializing, and all of these spaces offer opportunities for identity to be renegotiated (Camacho et al., 2012). It is a worthy—if not simple—endeavor to pursue authenticity and integrity within any kind of digital identity, digital media representation, or online interaction.

Interactive communication technologies (ICTs) complicate meaningful pursuits of authenticity. These digitally-mediated realms of human interaction challenge what we see as authentic and make it harder to tell the difference between what is “real” and what is “fake” (Molleda, 2010).  One need look no further than “reality” TV, social media personas, and journalistic integrity in the era of “fake news” to understand that not all claims of authenticity in media are substantive.   

This does not mean, however, that authenticity in digital space fails to have inherent value or is impossible to achieve.  An ethic of authenticity goes beyond any kind of plan, presentation, or strategic marketing campaign; authenticity is about presenting the essence of what already exists and whether (or not) it has the ability to live up to its own and others’ expectations and needs (Molleda, 2010).  Exercising authenticity includes making informed decisions about protecting personal data while still curating the digital profile one intends to reflect (ISTE Standard 7d).  Exercising authenticity also contributes to a wise use of digital platforms and healthy, meaningful online interactions (ISTE standard 7b).  

In a comprehensive literature review, Molleda (2010) found that several pervasive themes, definitions, and perceptions of authenticity consistently surfaced across a variety of disciplines.  Taken as a whole, Molleda (2010) asserts that these claims may be used to “index” or measure authenticity to the extent that they are present in any given communication or media representation.  Some of these key aspects of authenticity include:  

  1. Being “true to self” and stated core values  
  2. Maintaining the essence of the original (form, idea, design, product, service, etc.)  
  3. Living up to others’ expectations and needs (e.g. delivering on promises)  
  4. Being original and thoughtfully created vs. mass produced  
  5. Existing beyond profit-making or corporate/organizational gains  
  6. Deriving from true human experience  

Molleda (2010) concludes that consistency between “the genuine nature of organizational offerings and their communication is crucial to overcome the eroding confidence in major social institutions” (p. 233). I for one hope to continue embodying an ethic of authenticity in both my personal and professional work—in digital spaces and otherwise—in order to set the stage for that consistency and to bolster societal confidence in the institution I’m a part of.  

Accountability:

The digital realm is an extended space where human interactions take place, and the volume of interactions taking place in digital spaces is only increasing. As with any segment of human society, human flourishing, creativity, and innovation take place in spaces where people feel safe and invested in the community in which they find themselves. Thus, as a digital leader, it is important to empower others to seriously consider their individual roles and responsibilities in the digital world, inspiring them to use technology for civic engagement and for improving their communities, virtually or otherwise (ISTE standard 7a).

Humans do not come ready-made with all of the savvy needed to engage with media and online communications in wise ways.  Media literacy education is needed to provide the cognitive and social scaffolding that leads to substantive, responsible civic engagement (Martens & Hobbs, 2015).  Media literacy is also a subset of one’s own ethical literacy, which is the ability to articulate and reflect upon one’s own moral life in order to encourage ethical, reasoned actions.  As a digital citizen advocate and marketing professional, I hope to support educators in all kinds of contexts—both personal and professional—to examine the sources of online media, be mindful consumers of online content, and consistently identify underlying assumptions in the content we interact with (ISTE standard 7c).  In an effort to be digitally wise and to “beat the algorithm” in digital spaces (both literally and metaphorically speaking), we must self-identify potential echo chambers and intentionally seek out alternative perspectives.  This requires a commitment to media literacy education in all kinds of formal and informal environments.    

Media literacy education equips educational leaders and students to foster a culture of respectful, responsible online interactions and a healthy, life-giving use of technology (ISTE standard 7b).  According to Hobbs (2010), media literacy is a subset of digital literacy skills involving:  

  1. the ability to access information by locating and sharing materials and comprehending information and ideas;
  2. the ability to create content in a variety of forms, making use of digital tools and technologies;
  3. the ability to reflect on one’s own conduct and communication by applying social responsibility and ethical principles (ISTE standard 7b);
  4. the ability to take social action by working individually and collaboratively to share knowledge and solve problems as a member of a community (ISTE standard 7a); and
  5. the ability to analyze messages in a variety of forms by identifying the author, purpose, and point of view and evaluating the quality and credibility of the content (ISTE standard 7c).

In a study of 400 American high school students, those who participated in a media literacy program showed substantially higher levels of media knowledge and news/advertising analysis skills than other students (Martens & Hobbs, 2015). Perhaps more importantly, information-seeking motives, media knowledge, and news analysis skills each independently contributed to adolescents’ proclivity towards civic engagement (Martens & Hobbs, 2015), and civic engagement naturally requires dialogue with others within and outside of an individual’s immediate circle. In other words, the more students were able to critically consider the content they were consuming and the motives behind why they were consuming it, the more they wanted to engage with alternative perspectives and be active, responsible, productive members of a larger community (ISTE standard 7a).

This particular guiding principle is large in scope; its importance and relevance isn’t limited to a specific aspect of my professional context as much as it helps define an ethos for all actions, communication, and consumption that take place in the digital world. In order to hold ourselves accountable for our identities and actions online, we must exercise agency. The passive internet user/consumer is the one most likely to get caught in an echo chamber, develop destructive online habits, and communicate poorly in virtual space. The digitally wise will make consistent efforts to challenge their own thinking, create safe spaces for communication, intentionally seek out alternative voices, and actively reflect on their contributions to an online community, ultimately making digital spaces a little bit better than they found them.

References:  

Alvero, A.J., Arthurs, N., Antonio, A., Domingue, B., Gebre-Medhin, B., Gieble, S., & Stevens, M. (2020). AI and holistic review: Informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 200–206). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375871

Benjamin, R. (2019).  Race after technology: Abolitionist tools for the New Jim Code. Polity.  

Camacho, M., Minelli, J., & Grosseck, G. (2012). Self and identity: Raising undergraduate students’ awareness on their digital footprints. Procedia Social & Behavioral Sciences, 46, 3176–3181.

Choi, Y.W. (2020, March 31). How to address racial bias in standardized testing. Next Gen Learning. https://www.nextgenlearning.org/articles/racial-bias-standardized-testing  

Greenspan, R. (2019, May 15). Lori Loughlin and Felicity Huffman’s college admissions scandal remains ongoing. Here are the latest developments. Time. https://time.com/5549921/college-admissions-bribery-scandal/  

Hobbs, R. (2010). Digital and media literacy: A plan of action. The Aspen Institute Communications and Society Program & the John S. and James L. Knight Foundation. https://knightfoundation.org/reports/digital-and-media-literacy-plan-action/  

Koenig, R. (2020, July 10). As colleges move away from the SAT, will algorithms step in? EdSurge. https://www.edsurge.com/news/2020-07-10-as-colleges-move-away-from-the-sat-will-admissions-algorithms-step-in  

Martens, H. & Hobbs, R. (2015). How media literacy supports civic engagement in a digital age.  Atlantic Journal of Communication, 23, 120–137.  

Mintz, S. (2020, July 13). Equity in college admissions. Inside Higher Ed. https://www.insidehighered.com/blogs/higher-ed-gamma/equity-college-admissions-0  

Molleda, J. (2010). Authenticity and the construct’s dimensions in public relations and communication research. Journal of Communication Management, 14(3), 223–236. http://dx.doi.org.ezproxy.spu.edu/10.1108/13632541011064508

Pangburn, D. (2019, May 17). Schools are using software to help pick who gets in. What could go wrong? Fast Company. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong  

Bias in Higher Ed Admissions: Is New Tech Helping or Hurting?

It’s fairly well known that higher education admissions practices have made headlines in recent years, and issues of access and equity have been at the heart of the controversies. In 2019, a highly-publicized admissions scandal known as Operation Varsity Blues revealed conspiracies committed by more than 30 affluent parents, many in the entertainment industry, who offered bribes to influence undergraduate admissions decisions at elite universities. The scandal was not limited to the misguided actions of wealthy, overzealous parents, however; it also included investigations into the coaches and higher education admissions officials who were complicit (Greenspan, 2019).

Harvard University has also seen its fair share of scandals, including a bribery scheme of its own and controversy over racial bias in the admissions process. In 2019, a federal court ruled on a lawsuit brought against Harvard by Students for Fair Admissions, an advocacy group representing Asian-American applicants, over several core claims:

  1. That Harvard had intentionally discriminated against Asian-Americans
  2. That Harvard had used race as a predominant factor in admissions decisions
  3. That Harvard had used racial balancing and considered the race of applicants without first exhausting race-neutral alternatives.
[Image: Demonstrators hold signs in front of a courthouse in Boston, Massachusetts, in October 2018. Xinhua/Barcroft Images]

In line with the tenets of affirmative action, the court eventually ruled that Harvard could continue considering race in its admissions process in pursuit of a diverse class, and that race had never (illegally) been used to “punish” an Asian-American student in the review process (Hassan, 2019). Yet regardless of the ruling, Harvard was forced to look long and hard at its admissions processes and to meaningfully consider where implicit bias might be negatively affecting admissions decisions.

Another area of bias that has been identified in the college admissions system nationwide is the use of standardized tests, especially the SAT or ACT for undergraduate admissions and the GRE or GMAT for graduate admissions. The shift away from these tests has only accelerated during the pandemic, with many colleges and universities making SAT/ACT or GRE/GMAT scores optional for admission in 2020-2021 (Koenig, 2020). Research has repeatedly revealed how racial bias affects test design, assessment, and performance on these standardized exams, thus bringing biased data into the admissions process to begin with (Choi, 2020).

That said, admissions portfolios without standardized test scores have one less “objective” data point to consider in the admissions process, putting more weight on other, more subjective pieces of an application (essays, recommendations, interviews, etc.). Most university admissions processes in the U.S.—both undergraduate and graduate—are human-centered and involve a “holistic review” of application materials (Alvero et al., 2020). A study by Alvero et al. (2020) exploring bias in admissions essay reviews found that applicant demographic characteristics (namely gender and household income) could be inferred from the essays with a high level of accuracy, opening the door for biased conclusions drawn from the essay within a holistic review system.
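As a rough illustration of how that kind of leakage can be probed (this is a generic sketch, not the Alvero et al. pipeline; the essays and income labels below are invented), one can test whether a simple text classifier predicts a demographic label from essay text better than chance:

```python
# Generic sketch of probing whether admissions essays "leak" demographic
# signal. Not the Alvero et al. pipeline; essays and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

essays = [
    "My summer internship at my parents' firm taught me leadership.",
    "Traveling abroad every year showed me how much I love languages.",
    "Working night shifts to help with rent taught me persistence.",
    "Caring for my siblings after school shaped how I manage my time.",
    "Sailing camp and debate club pushed me to compete at a high level.",
    "Translating at my mom's clinic made me want to study medicine.",
]
# Hypothetical self-reported income bracket for each essay (1 = higher).
income_bracket = [1, 1, 0, 0, 1, 0]

probe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(probe, essays, income_bracket, cv=3)
print("Held-out accuracy per fold:", scores)
# Accuracy well above 0.5 (on a real, much larger sample) would suggest the
# essays carry demographic signal that a "holistic" reader could also pick
# up on, whether or not they intend to.
```

On six invented essays the numbers mean little; the point is the shape of the test, which scales directly to a real essay corpus.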

So the question remains—how do higher education institutions (HEIs) implement equitable, bias-free admissions processes that guarantee access to all qualified students and prioritize diverse student bodies? To assist in this worthwhile quest for equity, many HEIs are turning to algorithms and AI to see what they have to offer.

Lending a Helping Hand

Without the wide recruiting net and public funding that large state institutions enjoy, the search for equitable recruiting/admissions practices and diverse classes may be hardest for small universities (Mintz, 2020). Taylor University—a small, private liberal arts university in Indiana—has turned to the Salesforce Education Cloud (and the AI and algorithmic tools within) for assistance in many aspects of the admissions and recruiting process. The Education Cloud and other similar platforms “…use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes” (Koenig, 2020).

For a smaller university with limited resources, the Education Cloud helps Taylor’s admissions officers zero in on the type of applicants they feel are most likely to enroll and then identify target populations in other areas of the country that exhibit similar data profiles. Taylor can then strategically and economically focus recruiting efforts where they’re—statistically speaking—likely to get the most interest. With fall 2015 boasting its largest freshman class ever, Taylor is, in many ways, a success story, and Taylor now uses Education Cloud data services to predict student success outcomes and make decisions about distributing financial aid and scholarships (Pangburn, 2019).
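Salesforce does not publish how the Education Cloud’s models actually work, so the following is only a sketch of the “lookalike” logic described above; the features, numbers, and cosine-similarity measure are assumptions chosen for illustration.

```python
# Illustrative "lookalike" targeting: rank prospects by how similar their
# profiles are to the average profile of students who actually enrolled.
# Feature names and values are invented; real CRM models are proprietary.
import numpy as np

# Columns: [GPA (0-4), campus visits, miles from campus / 100, FAFSA filed]
enrolled = np.array([
    [3.6, 2, 1.5, 1],
    [3.8, 3, 0.8, 1],
    [3.4, 1, 2.0, 1],
])
prospects = np.array([
    [3.7, 2, 1.2, 1],   # prospect 0
    [2.9, 0, 9.0, 0],   # prospect 1
    [3.5, 1, 2.5, 1],   # prospect 2
])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

centroid = enrolled.mean(axis=0)            # the "typical" enrollee profile
scores = [cosine(p, centroid) for p in prospects]
ranking = np.argsort(scores)[::-1]          # most similar prospects first
print("Recruiting priority (prospect indices):", ranking.tolist())
# Equity caveat: if past enrollees skew affluent, the centroid does too,
# and outreach dollars follow the students who already look like them.
```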

Understandably, admissions officials want to admit students who have the highest likelihood of “succeeding” (i.e. persisting through to graduation). Salesforce claims that its AI predictive tools somehow account for bias that may exist in raw data reporting (like “name coding” or zip code bias), and companies with products similar to the Education Cloud market them as fairer, more objective, more scientific ways to predict student success (Koenig, 2020). As a result, HEIs like Taylor are confidently using these kinds of tools in the admissions process to help counteract biases that grow “situationally” and often unexpectedly from how admissions officers review applicants, including an inconsistent number of reviewers, reviewer exhaustion, personality preferences, etc. (Pangburn, 2019). Additionally, AI assists with more consistent and comprehensive “background” checks for student data reported on an application (e.g. confirming whether or not a student was really an athlete) (Pangburn, 2019). Findings from the Alvero et al. (2020) study mentioned earlier suggested that AI use and data auditing might be useful in informing the review process by checking potential bias in human or computational readings.

Another interesting proposal for the use of tech in the admissions process is the gamification of data points. Companies like KnackApp are marketing recruitment tools that would have applicants play a game for 10 minutes. Behind the scenes, algorithms allegedly gather information about users’ “microbehaviors,” such as the types of mistakes they make, whether those mistakes are repeated, the extent to which the player takes experimental paths, how the player is processing information, and the player’s overall potential for learning (Koenig, 2020). The CEO of KnackApp, Guy Halftek, claims that colleges outside the U.S. already use KnackApp in student advising. The hope is that U.S. colleges will begin using the platform in the admissions process to create gamified assessments, providing additional data points and measurements for desirable traits that might not otherwise be found in standardized test scores, GPA, or an entrance essay (Koenig, 2020).

[Image: Sample screenshot of a KnackApp game. Source: apkpure.com]
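KnackApp has not published its feature set or scoring, so the sketch below is purely illustrative: it shows how the kinds of “microbehaviors” the article mentions (repeated mistakes, willingness to try new paths) might be summarized from a hypothetical game event log.

```python
# Purely illustrative: KnackApp's real features and scoring are not public.
# This sketch summarizes a hypothetical game event log into the kinds of
# "microbehavior" features the article describes.
from collections import Counter

# Hypothetical event stream of (event_type, detail) pairs.
events = [
    ("move", "path_a"), ("mistake", "overshoot"), ("move", "path_a"),
    ("mistake", "overshoot"), ("move", "path_b"), ("move", "path_c"),
    ("mistake", "timing"), ("move", "path_b"),
]

mistakes = [detail for kind, detail in events if kind == "mistake"]
moves = [detail for kind, detail in events if kind == "move"]

features = {
    # How often the player errs at all.
    "mistake_rate": len(mistakes) / len(events),
    # Share of mistakes that repeat an earlier mistake type.
    "repeated_mistake_share":
        sum(c - 1 for c in Counter(mistakes).values()) / max(len(mistakes), 1),
    # Crude "exploration" proxy: distinct paths tried per move.
    "exploration": len(set(moves)) / max(len(moves), 1),
}
print(features)
```

Whether features like these actually measure “potential for learning,” rather than familiarity with games, is exactly the kind of validity question admissions offices would need to press vendors on.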

Regardless of their specific function in the overall process, AI and algorithms are being pitched as a way to make the admissions system more equitable by identifying authentic data points and helping schools reduce unseen human biases that can impact admissions decisions, while simultaneously making bias pitfalls more explicit.

What’s The Catch?

Without denying the ways in which technology has offered significant assistance to—and perhaps progress in—the world of HEI admissions, it’s wise to think critically about the function of AI and algorithms and whether or not they are in fact assisting in a quest for equity.

To begin with, there is a persistent concern among digital ethicists that AI and algorithms simply mask and extend preexisting prejudice (Koenig, 2020). It is dangerous to assume that technology is inherently objective or neutral, since technology is still created or designed by humans with implicit (or explicit) biases (Benjamin, 2019). As Ruha Benjamin states in the 2019 publication Race After Technology: Abolitionist Tools for the New Jim Code, “…coded inequity makes it easier and faster to produce racist outcomes” (p. 12).

Some areas of concern with using AI and algorithms in college admissions include:

  1. Large software companies like Salesforce seem to avoid admitting that bias could ever be an underlying issue, and instead market their products as having “solved” the bias issue (Pangburn, 2019).
  2. Predictive concerns: if future decisions are made on past data, a feedback loop of replicated bias might ensue (Pangburn, 2019); a toy illustration of this loop appears after this list.
  3. If, based on data, universities strategically market only to desirable candidates, they’ll likely pay more visits and make more marketing efforts to students in affluent areas and those who are likely to yield more tuition revenue (Pangburn, 2019).
  4. When it comes to “data-based” decision-making, it’s easier to get data for white, upper-middle-class suburban kids, and models (for recruiting goals, student success, and graduation outcomes) end up being built on easier data (Koenig, 2020).
  5. Opportunities for profit maximization are often rebranded as bias minimization, regardless of the extent to which that is accurate (Benjamin, 2019).
  6. Data privacy… (Koenig, 2020)
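To make the second concern concrete, here is a toy simulation (invented numbers, groups, and thresholds, not any vendor’s actual model) of how a system that re-learns from its own past decisions can quietly preserve an unequal bar:

```python
# Toy simulation of the feedback-loop concern: a "model" trained on past
# admissions decisions re-learns whatever skew those decisions contained.
# All numbers, group labels, and thresholds are invented.
import random

random.seed(0)

def make_applicants(n):
    """Applicants with a generic 'readiness' score and a group label."""
    return [{"group": random.choice("AB"), "readiness": random.gauss(0, 1)}
            for _ in range(n)]

def biased_history(applicants):
    """Hypothetical past human decisions: group B faced a higher bar."""
    return [a["readiness"] > (0.0 if a["group"] == "A" else 0.5)
            for a in applicants]

def fit_thresholds(applicants, admitted):
    """'Model' = lowest readiness score admitted within each group."""
    return {g: min([a["readiness"] for a, adm in zip(applicants, admitted)
                    if adm and a["group"] == g], default=float("inf"))
            for g in "AB"}

def admit_rate(applicants, admitted, g):
    group = [adm for a, adm in zip(applicants, admitted) if a["group"] == g]
    return sum(group) / len(group)

pool = make_applicants(2000)
admitted = biased_history(pool)
for round_num in range(3):
    model = fit_thresholds(pool, admitted)      # learn from last round
    pool = make_applicants(2000)                # fresh applicant pool
    admitted = [a["readiness"] > model[a["group"]] for a in pool]
    print(f"round {round_num}: admit rate "
          f"A={admit_rate(pool, admitted, 'A'):.2f}, "
          f"B={admit_rate(pool, admitted, 'B'):.2f}")
```

The gap between the two groups’ admit rates never closes, because each new round of decisions is generated by a model trained on the previous round; nothing in the data ever tells the system that the original bar was unequal.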

Finally, there’s always the question of human abilities and “soft skills,” and to what extent those should be augmented or replaced by AI in any professional field. There’s no denying the limitations AI and algorithms face in making appropriate contextual considerations. For example, how does AI account for a high school or for-profit college that historically participates in grade inflation? How does AI account for the additional challenges faced by a lower income or first-generation student? (Pangburn, 2019) There are also no guarantees that applicants won’t figure out how to “game” data-based admissions systems down the road by strategically optimizing their own data; if and when that happens, you can bet that the most educated, wealthiest, highest-resourced students and families will be the ones optimizing that data, thereby replicating a system of bias and inequity that already exists (Pangburn, 2019).

As an admissions official at a small, liberal arts institution, I am well aware of the challenges presented to recruitment and admissions processes in the present and future, and I am heartened to consider the possibilities that AI and algorithms might bring to the table, especially regarding efforts towards equitable admissions practices and recruiting more diverse student bodies. However, echoing the sentiments of Ruha Benjamin in Race After Technology, I do not believe that technology is inherently neutral, and I do not believe that AI or algorithms are comprehensive solutions for admissions bias. Higher education officials must proceed carefully, thoughtfully, and with the appropriate amount of skepticism.

References:

Alvero, A.J., Arthurs, N., Antonio, A., Domingue, B., Gebre-Medhin, B., Gieble, S., & Stevens, M. (2020). AI and holistic review: Informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 200–206). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375871

Benjamin, R. (2019).  Race after technology: Abolitionist tools for the New Jim Code. Polity.

Choi, Y.W. (2020, March 31). How to address racial bias in standardized testing. Next Gen Learning. https://www.nextgenlearning.org/articles/racial-bias-standardized-testing

Greenspan, R. (2019, May 15). Lori Loughlin and Felicity Huffman’s college admissions scandal remains ongoing. Here are the latest developments. Time. https://time.com/5549921/college-admissions-bribery-scandal/

Hassan, A. (2019, November 5). 5 takeaways from the Harvard admissions ruling. The New York Times. https://www.nytimes.com/2019/10/02/us/takeaways-harvard-ruling-admissions.html

Koenig, R. (2020, July 10). As colleges move away from the SAT, will algorithms step in? EdSurge. https://www.edsurge.com/news/2020-07-10-as-colleges-move-away-from-the-sat-will-admissions-algorithms-step-in

Mintz, S. (2020, July 13). Equity in college admissions. Inside Higher Ed. https://www.insidehighered.com/blogs/higher-ed-gamma/equity-college-admissions-0

Pangburn, D. (2019, May 17). Schools are using software to help pick who gets in. What could go wrong? Fast Company. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong
