Diigo as a tool for collaborative learning and research in higher education

There is significant opportunity within higher education environments (indeed, all education environments) to lean into a constructivist educational philosophy and approach knowledge as something co-created by both instructors and students. Furthermore, as higher education courses and programs are increasingly offered in hybrid and fully online modalities, finding authentic ways for students to increase their social presence and overall engagement in coursework is essential (Baran, 2013). Digital tools can greatly assist in the act of socially constructing knowledge by helping to eliminate learning boundaries and extend opportunities for both formal and informal learning in a myriad of ways (Baran, 2013).

Within higher education, one of the more important realms of knowledge construction between students and instructors, especially at the graduate level, is the academic research process and the conversations that, quite literally, take place in the margins of that process (Farber, 2019). As a graduate student who currently does not use any specific annotation or research collaboration tools outside of Microsoft Teams or Google Workspace, I am curious to explore the ways in which the social bookmarking tool Diigo (which allows learners to collect, annotate, organize, and share online resources) supports efficient, collaborative research among instructors, students, and their peers in higher education. Furthermore, I am interested in anecdotally comparing this tool with the functionalities of more recently released (and decidedly more expansive) digital collaboration platforms like Teams and Google Workspace. Does Diigo hold its own in the digital collaboration tool market in 2021?

According to the product website (Diigo Inc., 2021), Diigo supports collaborative learning endeavors in four key ways.  It allows users to:

  • Collect: save and tag online resources to public or private curated libraries for easy access
  • Annotate: annotate web pages, PDFs, and other digital content directly while browsing online
  • Organize: organize links, references and personal input to create a structured research base
  • Share: share research with friends, classmates, colleagues or associates

Originally released in 2011, Diigo (Digest of Internet Information, Groups and Other stuff) quickly found a dedicated user base and distinguished itself from other bookmarking applications (most notably its 2003 predecessor, Delicious) through its user-friendly interface, emphasis on social engagement, and education-specific extensions, combined with more traditional bookmarking functionalities (Ruffini, 2011). The table below offers a helpful comparison between Diigo, its initial competitor and predecessor Delicious (now defunct), and the typical bookmarking capabilities of a web browser.

Table 1. Social Bookmarking Comparison Chart (updated by Ruffini, 2011)

| Feature | Diigo | Delicious | Browser |
| --- | :-: | :-: | :-: |
| Organize bookmarks automatically with tags | X | X | X |
| Popular bookmarks | X | X | X |
| Anytime, anywhere access to bookmarks | X | X | |
| Share bookmarks with others | X | X | |
| Powerful, customizable search tools | X | X | |
| Groups of people with similar interests | X | X | |
| Post automatically to blog | X | X | |
| Tools and browser extensions for bookmarking | X | X | |
| Lists of grouped bookmarks | X | X | |
| Free iPhone and Android apps | X | Third party | |
| iPad Safari browser bookmarklet | X | | |
| Add and share sticky notes | X | | |
| Capture, mark up, share images and text | X | | |
| Collect web pictures into albums | X | | |
| Sync bookmarks | X | | |
| Tools for educators | X | | |
Original table by: Schmidt, Jason. (2010, July 30). Diigo and Delicious. Interactive Inquiry. https://iisquared.wordpress.com/2010/07/30/diigo-and-delicious/

As the table indicates, Diigo offers much more functionality in several categories, but most notably (pun intended) in the realm of annotation.  When installed as a browser extension, Diigo can be integrated fairly seamlessly into existing research habits, and perhaps most importantly, most Diigo tools can be accessed for free.

Since Delicious left the scene, new social learning/annotation tools have surfaced, such as Hypothes.is and Mendeley, both of which deliver many of the same features as Diigo, and both of which are largely free to use. Though a detailed direct comparison of these tools is beyond the scope of this post, a brief exploration suggests that Hypothesis might be most appealing for educators who would like to incorporate the application into their Learning Management System (LMS). Hypothesis integrates nicely into all of the major LMS platforms, and it offers many resources and training videos for educators so that they can truly maximize their use of the tool within their planned learning activities (Guhlin, 2020). Mendeley, a tool primarily intended for use in higher education, has a handy “cite as you write” plugin that streamlines the reference process, automatically capturing author, title, and publisher information as needed (Guhlin, 2020).

Though it’s now been a decade since its release, Diigo seems to maintain its relevance and dedicated user base for several reasons:

  1. It is a bit simpler and more user-friendly than its competitors such that it is more easily adapted for use in a variety of learning environments and contexts, including both K-12 and higher education (Guhlin, 2020).
  2. Diigo seems to be the tool with the strongest integrated support for educators independent of an LMS.  Since its early stages, special accounts have been available for educators, empowering registered teachers with a variety of extra tools and features and allowing the tool to be leveraged for use and collaboration with an entire class if desired (Educational Technology and Mobile Learning, 2015).
  3. Individual features like Diigo Outliner, which lets you create and share digital outlines within a document, add sustaining value to the tool; these features are more nuanced than the general commenting features or Track Changes available in so many other types of collaboration tools (Guhlin, 2020).
  4. Because of its longevity, Diigo has had time to build a large, lasting user base and dynamic interest groups (e.g., K-12 teachers, higher education instructors, researchers) which offer grassroots professional development tips and organic user insights accessible to the whole community (Ruffini, 2011).
  5. Diigo’s longevity is also a testament to the creators’ ability to update the tool to best fit user needs over time, and there continue to be product and app updates on a regular basis. Diigo has evolved over the years, and today it is used more frequently for its collaboration/annotation capabilities than the social bookmarking services it was originally focused on (Guhlin, 2020).

Having recently worked on a collaborative research publication using Microsoft Teams, and as a frequent user of collaborative Google products for both academic and personal endeavors, I was curious whether this exploration would support Diigo as a stand-alone tool worth considering for collaborative research endeavors, or whether its offerings are more or less redundant with tools embedded within these larger platforms.  Anecdotally, I think the answer is yes, Diigo does stand alone, at least for specific use cases.

Both Teams and Google have many strengths when it comes to video conferencing, and cloud-based word processing and document sharing, but I do not find that these tools go quite so far to aid in the initial research phase. According to Educational Technology and Mobile Learning (2015), with Diigo, student and faculty researchers may:

  • Search for online content relevant to their project, bookmark the websites and then add them to a shared ‘class’ or group
  • Organize bookmarks by tags and date to organize content around a particular topic and to make it easy to search for it later
  • Highlight specific segments of a webpage or add sticky notes to annotate them for others to read 
  • Take screenshots of useful online content and annotate them for use as well

In these cases, Diigo essentially cuts out a step (or multiple steps) for an instructor or student trying to share research. In the initial research phase, a researcher using Diigo would not need to download a PDF or copy the link for a website of interest, only to re-upload or paste it later into a general repository, where it could not be annotated or organized as efficiently as Diigo allows.  However, I do think Diigo finds its strongest value when working toward a specific research purpose with a specific group of people.  Its value is inherently collaborative, and the tool is best used when trying to co-construct knowledge.  Consequently, it isn’t a tool I’ll be using regularly in my daily academic activities for just myself, but it is a tool I’ll be reaching for when it comes time to spearhead my next research project.

Resources:

Baran, E. (2013). Connect, participate and learn: Transforming pedagogies in higher education. Bulletin of the IEEE Technical Committee on Learning Technology, 15(1), 9–12. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.681.1177&rep=rep1&type=pdf

Diigo Inc. (2021). Diigo. https://www.diigo.com/

Educational Technology and Mobile Learning. (2015, January 14). 7 ways students use Diigo to do research and collaborative project work. https://www.educatorstechnology.com/2015/01/7-ways-students-use-diigo-to-do-research.html

Farber, M. (2019, July 22). Social annotation in the digital age. Edutopia. https://www.edutopia.org/article/social-annotation-digital-age

Guhlin, M. (2020, April 13). Note-taking and outlining: Five digital helpers. TechNotes. https://blog.tcea.org/note-taking-and-outlining/

Ruffini, M. (2011, September 27). Classroom collaboration using social bookmarking service Diigo. Educause Review. https://er.educause.edu/articles/2011/9/classroom-collaboration-using-social-bookmarking-service-diigo

Schmidt, J. (2010, July 30). Diigo and Delicious. Interactive Inquiry. https://iisquared.wordpress.com/2010/07/30/diigo-and-delicious/

Leveraging Digital Tools for Instruction Outside the Classroom: Community Engagement Project, EDTC 6102

It’s been a wonderful opportunity to invest time and energy into a project which ultimately helps me do my job better.  Though I am not currently a classroom teacher and do not have typical instructional responsibilities in my day-to-day work, I do constantly convey important information about the WA State teacher certification process to prospective educators, a process which can often feel overwhelming and convoluted to would-be career changers. I feel confident that I’ve put together a blueprint for a meaningful, engaging informational session which will help students navigate the first steps of the certification process with confidence and clarity, ultimately helping them discern for themselves whether becoming a teacher is the right professional step for them at this time, and if so, how to make that a reality. Not only that, I’ve been able to think creatively about how to leverage digital tools to make the session interactive, student-centered, and practically useful to those who attend.

Lesson Plan for 60 Minute Information Session

This lesson plan is intended for a Group of 10-20 prospective graduate students using a virtual teleconference platform like Zoom, Teams, Google Meet, etc.

Note: some hyperlinks may not be accessible to all due to permissions settings

Introduction 
10 min.
Students will be notified of my intention to record the session and their associated rights (turning off video, etc.).

Introduce Self and Learning Objectives of Info Session:
Objective 1: learn how someone becomes a teacher in WA State and determine personal readiness to begin the process.
Objective 2: learn what program options are available at SPU and which program is best-fit for personal context
Objective 3: learn what steps to take next with an application to a teacher certification program

10 Signs That You Should Become a Teacher, opening reflection video
Interactive Presentation
20 min.
Google Slide Deck
Students will be provided with access to the Google Slide Deck in advance of the information session so that they may conduct research ahead of time or follow along independently, clicking on hyperlinks, etc. in their own browser window. I will also use it to structure the presentation of information in the session.  For those who do not choose to follow along independently, hyperlinks for interactive elements will be provided directly in the session chat.

Within the slide deck, students will be introduced to SPU’s various graduate teacher certification program options. Students will be given the opportunity to stop and reflect on what feels like their best-fit program halfway through, before new information about the application process is introduced.  They will also be given the opportunity to ask clarifying questions at this point via Jamboard.

Students will then be given detailed information about application requirements and due dates, including specific information about the endorsement verification process.  This is also a time where I intend to help students understand and navigate the various information systems which they will have to engage with throughout the certification and application process (online application, standardized tests, etc), and to what extent they’ll be expected to provide personal data in digital spaces.
Formative Assessment
10 min.
Prospective students will then have the opportunity to review and test their understanding of the material via a brief, 15-question Kahoot quiz.  This will also serve as a formative assessment tool for me, the instructor, and may help point out areas that need further clarification. There will be time to pause and address these areas while going through the Kahoot quiz.
Performance Task 
10 min.
Students will then have the opportunity to curate a personal To Do List outlining their next steps towards an application (and thus, towards becoming a teacher).  This is a performance task that helps students indicate their understanding of the material covered in the information session, but it is also meant to be a practical, relevant takeaway for each attendee.

Students will be able to make a copy of this Google Doc Template which contains a scaffolded “word bank” of application requirements.  Students will be able to copy/paste from the word bank in order to create their own, personalized To Do List, paying special attention to their specific program needs, endorsement requirements, and chronological order (i.e. which items need attention first). I will also provide the Google Doc Template in an alternate format (i.e. Word document) for any students that need it after the session. 
Self-Assessment & Reflection
10+ min.
Shortly after providing students with the Google Doc link, I will provide a link to the final Sticky Note Q & A w/ Jamboard so that students may ask any needed questions while they construct their lists without interrupting the thought processes of others in the session.  They may also choose to drop questions privately to me in the chat, depending on the immediacy of the need and/or the group appeal of their question.

After the allotted 10 minutes passes for list construction, I’ll also offer a final few minutes for students to review their lists and ask final questions that need resolving on the Jamboard, OR live in the video conference, time permitting. The session will be recorded and the recording will be provided to students after the information session via email. This allows students to go back and review as needed.

Throughout this session, students will have the opportunity to demonstrate their understanding through application and self-knowledge.

  • Application: as students create their own, personally curated “To Do List” they will be able to apply their understanding of the session material by creating a useful tool that will guide them moving forward.  This includes discerning which information is most relevant to their particular context.  Students will be able to effectively navigate the various information systems which they will have to engage with throughout the certification and application process, especially in regards to the information they provide during the application process.  This also brings to mind the fact that students will literally apply to a program as part of their next steps towards becoming a teacher.
  • Self-Knowledge: students will be invited to reflect on their motivations to become a teacher, whether now is the time to take steps towards becoming a teacher, as well as what kind of program would be best suited for their needs. There will also be ample time for students to grapple with what they do not know, or what is confusing for them in this process.  The decisions they make moving forward will be rooted in the self-knowledge acquired from this session.

I do believe that the most helpful reflections on this “lesson plan” will come after I’ve had the opportunity to put it into action for the first time in a professional setting, likely in Fall of 2021. That said, one area for potential improvement that I can already identify is the platform for the curated “To Do List.”  Though I found some potentially interesting tools that might be a bit sleeker and less cumbersome than a Google Doc, the ones I came across were not open access or would require a full account set-up in order to use them. This wouldn’t translate well to the timing and context of this particular lesson, nor to the student audience (i.e., prospective students attending an information session, not students enrolled in a class).  Thus, for now, I have the Google Doc format as a bit of a placeholder.  I’m quite open to tweaking this section of my lesson in favor of a better tool later on.

In summary, this particular project was a wonderful exercise in thinking about learning and instruction on a macro level and the many ways they take place outside of a formal classroom environment. Digital tools may be leveraged in a myriad of ways to help us do our jobs better, and this was an opportunity for me to think creatively about how to bring that home in my own context. I look forward to using this session format in the next recruiting cycle!

Global research collaboration and the pandemic: How COVID-19 has accelerated networked learning in higher education

Image courtesy of https://www.polyu.edu.hk/web/en/about_polyu/global_network/

According to the National Science Foundation (2019), one out of every five academic research articles is written by authors hailing from more than one country. This fact suggests that the value of international research collaboration was recognized and sought out well in advance of the global COVID-19 pandemic of 2020, but perhaps it’s only just the beginning.  Reasons to pursue global collaboration in higher education include reaching wider audiences and increasing the impact of published research, reducing bias and broadening perspectives with a diverse research team, and offsetting domestic skill shortages by collaborating across national borders (Lee & Haupt, 2020).  Networked learning in higher education can also encourage new levels of creativity and innovation in all kinds of disciplines, and it expands the potential for authentic global and cultural learning experiences in an increasingly connected world (Cronin et al., 2016).  These benefits apply to even the most scholastically “productive” countries like the USA and China (Lee & Haupt, 2020).  That said, understanding that international collaboration in higher education was valued–at least to a certain extent–prior to 2020, I am curious to explore how recent changes in technology and cultural shifts in academia during the pandemic have worked in tandem to build upon this trend, potentially accelerating technology’s impact on global research collaboration and cooperation into the future.

With the onset of COVID-19, scientists and researchers from every corner of the world scrambled to understand the virus and seek a cure. Information sharing among countries quickly became essential, especially for those countries that were hardest hit early on (Lee & Haupt, 2020).  Though socio-political tensions between countries–and even domestically within countries–were hardly in short supply in 2020, the demands of the pandemic shifted priorities such that many international corporations and research institutions began working together rather than competing to produce a vaccine, and large-scale exchanges of medical and public health data, including possible solutions, were (and still are) shared internationally using digital and online tools (Buitendijk et al., 2020).

When it comes to information sharing, one way of measuring an increase in international collaboration is through a country’s participation in open access publication platforms.  Open access journals and publication platforms remove barriers to accessing information and research, since they do not require payment or subscriptions in order to be read and cited.  Sometimes the cost of publishing is absorbed by these platforms through philanthropic efforts, sponsorship, or submission fees paid by the authors, but the bottom line is that there is typically little profit to be made by academics, researchers, and authors who publish in open access venues. Thus, the motivation for researchers–either individually or nationally speaking–to publish open access is often more altruistic in nature, placing higher priority on the sharing of information than on any potential gains or notoriety, monetary or otherwise.  According to Lee & Haupt (2020), countries with lower GDP that were more severely affected by the pandemic were the most likely to increase participation in open access publishing and international research efforts. It would follow that decisions to increase open access participation were also meant to elicit reciprocal behavior from other countries, and indeed, the majority of all “knowledge producing” countries increased their participation in open access publishing during the pandemic: “For each of the top 25 COVID-19 research-producing countries, there was a noticeably higher proportion of open-access articles on COVID-19 than during the past 5 years and on non-COVID-19…publications during the same period” (Lee & Haupt, 2020, para. 26).

In addition to the public health concerns that have motivated scholars and researchers to share more information during the pandemic, it must also be said that the act of collaboration has gotten dramatically easier in recent years.  In a 2010 publication, Iorio et al. discuss the use of digital tools designed to facilitate international collaboration and interaction among higher education scholars.  In this specific case, domestic teams in five different areas of the world were attempting to complete an integrative design task, which required synchronous virtual meetings; a way to exchange ideas, brainstorm, and problem solve (though not necessarily in real time); and an appropriate digital repository for their work (building plans, model mapping, cost estimates, etc.) which could be accessed frequently by the members of each team in their respective countries.  The article focused its review on the virtual reality platform Second Life. To me, Second Life now feels woefully insufficient as a project management platform, at least by 2021 standards, but at the time, the authors found Second Life to be a comparatively “appealing choice” due to its options for customization and tools such as virtual whiteboards, voice and text chat, and scheduling agents, all in one centralized, virtual location. Second Life aside, the authors noted that “To date, very few technological options exist that provide all of [the needed] functionalities to distributed networks. Commonly used tools such as email, instant messaging, and teleconferencing do not provide a framework for interaction that fully satisfies the demands of geographically distributed projects.”

Sample of a virtual meeting room in Second Life; image courtesy of https://marketplace.secondlife.com

In short, Second Life was more or less the best this research team could find in 2010. Since then, there’s been a massive influx of virtual project management/collaboration platforms introduced to the market.  Consider the list below of some of the “big names” in collaboration software along with their launch dates:

This list represents nine powerhouse collaboration platforms, all of which rolled out between 2010 and 2020, and many of which depend heavily on the power and popularity of cloud storage or cloud computing (which was also expanding significantly during this time frame).  And please note: this list is hardly exhaustive.  There are many more out there (and counting!), and even the ones on this list are constantly being updated and expanded.  Each platform or collection of tools on this list boasts its own strengths and weaknesses (the exploration of which is not the point of this post), but there can be no doubt that no matter the platform, it is easier to communicate, collaborate, and innovate with people all over the world in 2021 than it was even ten years ago.  Of course, not only have the tools themselves gotten better, but the pandemic has accelerated digital tool adoption for purposes of collaboration at an extraordinary rate, popularizing already existing tools (e.g., Zoom teleconferencing) in unprecedented ways, such that millions, if not billions, of students and professionals in myriad settings worldwide are collaborating and problem solving virtually across distances in ways they were not one year ago.

The COVID-19 pandemic has made starkly clear that we need ‘global solutions to global challenges’ (Buitendijk et al., 2020) and that we need not relegate the phenomenon of increased global collaboration in higher education to a particular moment in time.  Instead, we might view this as an opportunity to challenge the model of competition between higher education institutions and place lasting value on diversified bodies of knowledge production, dissemination, and consumption (Buitendijk et al., 2020).  We might also recognize that a philosophy of collaboration makes it possible for students and lecturers in all types of higher education settings to have more equal roles in creating content, sharing resources, and asking and answering important questions (Cronin et al., 2016).  As centers of research all over the world, universities have a crucial role to play in helping humans better care for one another on a global scale, teaching us to become “more empathic, less competitive, and more networked in our research and educational activities” (Buitendijk et al., 2020).  Let us not lose the momentum of this moment to embrace a new norm in higher education, maintaining a sincere commitment to, and value of, community-minded research and collaboration across borders.

For further discussion on this topic, consider viewing this 60-minute webinar, “The impact of COVID-19 on University Research and International Collaborations” offered through the UC Berkeley Center for Studies in Higher Education. The webinar was recorded in August of 2020.

References:

Buitendijk, B., Ward, H., Shimshon, G., Sam, A., & Sharma, D. (2020). COVID-19: An opportunity to rethink global cooperation in higher education and research. BMJ Global Health. http://dx.doi.org/10.1136/bmjgh-2020-002790

Cronin, C., Cochrane, T., & Gordon, A. (2016). Nurturing global collaboration and networked learning in higher education. Research in Learning Technology, 24.

Iorio, J., Peschiera, G., Taylor, L., & Korpela, L. (2010). Factors impacting usage patterns of collaborative tools designed to support global virtual design project networks. Journal of Information Technology in Construction (ITcon), 16, 209-230. https://itcon.org/papers/2011_14.content.08738.pdf

Lee, J., & Haupt, J. (2020). Scientific globalism during a global crisis: research collaboration and open access publications on COVID-19. Higher Education. https://doi.org/10.1007/s10734-020-00589-0


National Science Foundation (2019). Publications Output: U.S. Trends and International Comparisons. National Science Board: Science and Engineering Indicators. https://ncses.nsf.gov/pubs/nsb20206/executive-summary

Exemplars of Computational Thinking in Higher Education Classrooms

Though the concepts and theory behind computational thinking (CT) have been around for decades in the realms of computer science and engineering, it is widely acknowledged that Jeannette Wing’s 2006 publication on computational thinking laid the groundwork for CT’s popularity and integration in 21st century education theory.  Wing (2006) suggested that CT might be considered essential to all human endeavors, as it is a distillation of the way we naturally approach solving problems, managing our daily lives, and communicating and interacting with people.  It need not be relegated to the STEM fields and computer science majors, because CT is not about getting humans to think like computers.  Rather, CT harnesses the natural outpouring of human cleverness, creativity, and problem solving that laid the foundations for the field of computer science in the first place (Wing, 2006). CT is about “…solving problems, designing systems, and understanding human behavior” by drawing on, and leveraging, the concepts fundamental to computer science (Wing, 2006, p. 33).

Though academics continue to debate an authoritative definition for CT, certain common themes are generally accepted characterizations of CT across the board.  These characteristics include:

  • Abstraction — thinking through abstract concepts and ill-defined problems, at times breaking them into smaller, digestible pieces, in order to move towards a more concrete, real-world solution (Wing, 2006).
  • Pattern Recognition —  recognizing useful patterns in data, filtering out the characteristics of patterns that aren’t needed, focusing on those that are (Wing, 2006).
  • Algorithmic Thinking — curating a list of steps that can be followed to solve a problem (Lyon & Magana, 2020).
  • Creative Problem Solving — developing a unique, context-based solution that is considered original, valuable, and useful (Romero et al., 2017).
  • Evaluating Solutions — considering the efficacy of a proposed solution to a problem, perhaps making considerations for factors like efficiency and resource consumption (Lyon & Magana, 2020).
Image sourced from https://koneilleci201.wordpress.ncsu.edu/2020/01/28/computational-thinking/
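To make the facets above concrete, consider a small worked example (my own illustrative sketch, not drawn from any of the cited sources): answering the question “which students are enrolled in more than one course?” from a set of class rosters.

```python
# Illustrative sketch of the CT facets in action (hypothetical example):
# find students who appear on more than one course roster.
from collections import Counter

def multi_course_students(rosters):
    """Return a sorted list of students enrolled in multiple courses.

    Abstraction: the ill-defined question "who is double-enrolled?" is
    reduced to counting how often each name occurs across rosters.
    Pattern Recognition: repeated names are the only pattern we need;
    course titles, ordering, and other details are filtered out.
    Algorithmic Thinking: (1) flatten the rosters, (2) count each name,
    (3) keep names whose count exceeds one, (4) sort for readability.
    """
    counts = Counter(name for roster in rosters for name in roster)
    return sorted(name for name, n in counts.items() if n > 1)

rosters = [["Ana", "Ben", "Chris"], ["Ben", "Dana"], ["Ana", "Eli"]]
print(multi_course_students(rosters))  # prints ['Ana', 'Ben']
```

Evaluating Solutions shows up in the last step of the design: because the counting pass touches each name exactly once, the approach scales linearly with roster size, which is one simple way to judge the efficacy and resource consumption of a proposed solution.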

These facets of CT and the related skills are all integral parts of a 21st century education at all levels, including K-12 and postsecondary.  Indeed, “…computational sciences have been deemed essential to maintaining national competitiveness in the workplace and national security” (Lyon & Magana, 2020, p. 1174).  For these reasons, the fundamentals of CT have been championed in education theory over the last decade, and nationally recognized standards like the Common Core State Standards and the ISTE Standards for Students have pointedly emphasized the importance of “21st century skills” in K-12 education while simultaneously offering clear guidance for what CT can look like in action.

But what about higher education?  The implementation of CT in higher education classrooms is noticeably harder to call out, especially outside of computer science and engineering classrooms.  In my opinion, this is likely due to a number of factors including, but not limited to:

  1. Lack of collegial collaboration:  higher education disciplines are notoriously siloed. Meaningful integration of CT concepts outside of computer science and engineering programs demands intentional professional development for faculty, as well as interdisciplinary cooperation, both of which can be less accessible in higher education.
  2. Lack of resources: there is relatively little literature available which provides ideas for practical application of CT outside of computer science programs (i.e. coding and computer programming) at the postsecondary level. Additionally, published standards often lean more heavily towards K-12 education.
  3. Questions of applicability: the humanities often resist algorithmic ways of knowing because there is so much value placed on interpretation, subjectivity, and open debate about meaning (Czerkawski & Lyman, 2015).
  4. Just getting started: there is growing interest in translating CT pedagogy into a wide variety of disciplines in higher education (and K-12 for that matter), but the research and discussions are just getting started.  There is much yet to be explored.

Many STEM instructors in higher education naturally incorporate CT in their approach to teaching and learning because of the nature of their fields, and engaging with CT in courses devoted to coding and programming is already integral to computer science and engineering majors.  With that in mind, I seek to offer a few alternative examples of CT as it has been used to enhance teaching and learning in other kinds of higher education environments:

  • In a professional writing course taught at the undergraduate level, CT was used to systemize the writing process.  It called for a “deconstructive approach, breaking down the task of structured authoring into multiple layers of abstraction, and teaching each layer independently” (Lyon & Magana, 2020, p. 1182).
  • In the fine arts, CT can be used as a tool to enhance creativity.  In one example, CT was used to create an organized system for tracing the origins of musical composers, which in turn inspired new creative endeavors based on the organized data. “…Algorithmic composition in music is, effectively, a human-computer collaboration–the computer serving as a tool that extends the composer’s ability to explore new musical ideas” (Edwards, 2011, p. 67).
  • The Stanford Literary Lab famously applied CT via Graph Theory to perform a “network analysis of character relationships and interactions” in a series of Shakespeare’s plays (Czerkawski & Lyman, 2015).
  • In the life sciences, CT has been used to inform systems theory and how to teach and understand biological processes, such as genetics, in an organized, logical fashion (Czerkawski & Lyman, 2015).
  • Utilizing a process dubbed “creative programming,” instructors may engage learners in the process of designing and developing an original work through coding. In this collaborative approach, learners are encouraged to co-construct knowledge in an interdisciplinary way. Examples might be to have students in a history course (co-)create a rendering of a city at a given historical period, or to present a traditional story in a visual programming tool like Scratch. In this kind of activity, learners must use skills and knowledge in mathematics, technology, language arts, and social sciences (Romero et al., 2017, para. 3).
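
To make one of these examples concrete, the Stanford Literary Lab's character-network approach can be sketched in miniature. The scene data below is invented for illustration, and this is only a toy version of that kind of analysis:

```python
from collections import defaultdict
from itertools import combinations

# Invented scene list: which characters appear together in each scene.
scenes = [
    {"Hamlet", "Horatio"},
    {"Hamlet", "Gertrude", "Claudius"},
    {"Hamlet", "Ophelia"},
    {"Gertrude", "Claudius"},
]

# Build a weighted co-occurrence graph: an edge links two characters who
# share a scene, weighted by how many scenes they share.
edges = defaultdict(int)
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        edges[(a, b)] += 1

# A simple centrality measure: degree, i.e., how many distinct
# characters each character shares at least one scene with.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(max(degree, key=degree.get))  # Hamlet
```

Scaled up to the full text of a play, the same pattern of decomposition (scenes into character pairs) and pattern recognition (repeated co-occurrence) yields the network of character relationships and interactions the Literary Lab studied.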

Modeling and simulation activities are excellent examples of CT at work, and these types of learning activities can certainly extend to many types of fields and disciplines.  Consider a learning activity where a group of undergraduate philosophy majors create a simulated narrative presentation wherein a human “character” makes a series of daily choices based on their moral philosophy or framework–almost like a “Choose Your Own Adventure” novel meets systems theory within one, or multiple, philosophical frameworks.  The simulation itself could be a computer-based product (or not), but regardless, the learning activity would draw upon many tenets of CT while also demonstrating in-depth knowledge of the discipline-specific subject matter.
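
A rough sketch of such a branching simulation (the scenario, choices, and framework labels here are all invented for illustration) could model the narrative as a simple state machine:

```python
# Invented scenario: each key is a situation, each choice is labeled with
# the moral framework it exemplifies and points to the next situation.
story = {
    "found_wallet": {
        "return it unopened (deontology: duty over outcome)": "owner_thanked",
        "donate the cash (utilitarianism: maximize overall good)": "charity_helped",
    },
    "owner_thanked": {},   # terminal outcomes have no further choices
    "charity_helped": {},
}

def run(node="found_wallet", chooser=None):
    """Walk the story from `node`, returning the path of situations visited.

    `chooser` selects among the available choices; by default the first
    choice is taken, standing in for an interactive reader.
    """
    path = [node]
    while story[node]:                      # stop at a terminal situation
        choices = list(story[node])
        choice = chooser(choices) if chooser else choices[0]
        node = story[node][choice]
        path.append(node)
    return path

print(run())  # ['found_wallet', 'owner_thanked']
```

Designing the story graph itself (abstraction and decomposition of a moral framework into discrete decision points) is where the discipline-specific knowledge does its work; the code merely makes the system explicit.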

All fields and disciplines require problem solving in some form.  Thus, it is reasonable to assume that CT may be useful in expanding the human ability to effectively problem solve in all fields.  In one study comparing the use of CT by an undergraduate computer science student and an art student, the researchers found that both students “…used various CT skills when solving all [italics added] problems, and the application of CT skills was influenced by their background, experiences, and goals” (Febrian et al., 2018, para. 1).  Regardless of training, background, or chosen major, CT enables postsecondary students to become more efficient problem solvers in all areas of life, teaching them to recognize computable problems and approach the problem-solving process as skillfully as possible (Czerkawski & Lyman, 2015).

References

Czerkawski, B.C. & Lyman, E.W. (2015). Exploring issues about computational thinking in higher education. TechTrends 59(2), 57–65. https://doi.org/10.1007/s11528-015-0840-3 

Edwards, M. (2011). Algorithmic composition: Computational thinking in music. Communications of the ACM, 54(7), 58-67. https://doi.org/10.1145/1965724.1965742

Febrian, A., Lawanto, O., Peterson-Rucker, K., Melvin, A., & Guymon, S. E. (2018). Does everyone use computational thinking?: A case study of art and computer science majors. Proceedings of the ASEE Annual Conference & Exposition, 1–16.

Lyon, J. & Magana, A. (2020). Computational thinking in higher education: A review of the literature. Computer Applications in Engineering Education 28(5), 1174-1189. https://doi.org/10.1002/cae.22295

Romero, M., Lepage, A., & Lille, B. (2017). Computational thinking development through creative programming in higher education. International Journal of Educational Technology in Higher Education 14(42). https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-017-0080-z

Wing, J. (2006). Viewpoint: Computational thinking. Communications of the ACM, 49(3), 33-35. https://doi.org/10.1145/1118178.1118215

The Changing Nature of Research in Higher Education

Research in higher education looks very different today than it did even ten years ago.  Academics who, not so very long ago, were well acquainted with physical library study spaces and large collections of peer-reviewed academic journals, find themselves in a digitized world of research with unprecedented access to information and virtual repositories of human meaning-making activity.  The nature and culture of research in higher education is shifting, including that which is considered “worthy” content to explore when conducting research in all kinds of disciplines.  One need look no further than the APA reference guide and the ever-expanding list of possible resources (e.g., YouTube videos and TED talks, podcasts, blog posts, etc.) to note that the “rules” of research are expanding, and must expand, alongside our access to information.


As a current doctoral student (and someone who received my initial graduate degree a decade ago), I am curious about the ways the research sector of higher education has changed over time. How are undergraduate students being taught to conduct research?  What kind of shifts have been made due to new tools and technology platforms that assist in the research process?  What cultural shifts are happening in graduate and doctoral programs, and are these cultural shifts impacting research strategy? 

Jongbloed et al. (2008) posit that higher education has an expanding set of stakeholders and thus a continually shifting societal expectation of what a university’s public obligation is.  Early universities provided education exclusively for clergy and societal elites, but over the centuries, higher education has been democratized such that there are many invested parties and participants with competing paradigms and priorities. Indeed, one of the major, ongoing, accelerated shifts in higher education is the diversification of students, staff, and faculty and the role that universities can/should play as advocates of–and vehicles for–social justice (Brennan & Teichler, 2008).  We also now live in a “knowledge society” where knowledge is considered the solution to everything and the key to personal and societal advancement (Jongbloed et al., 2008).  Thus, higher education institutions (HEIs) are driven to make teaching and research more publicly accountable, often restructuring programs and creating new ones to meet modern societal demands and forfeiting, or “reorienting,” long standing academic norms and values along the way (Jongbloed et al., 2008).

Even the doctorate, a terminal, research-based degree that is typically the highest academic credential a university can award, isn’t immune to change.  There is increasing demand for doctoral programs to become more relevant, to produce academics with transferable skills in their field in addition to research skills, and to be more sensitive to issues of employability that extend beyond creating new academics who scarcely step outside the “ivory tower” of a university campus (Park, 2005).  This requires attention to the course structure and modality of a doctoral program, the quality of the mentorship provided, the diversity of students within the program, and an expansion of what is considered sufficient, valuable evidence of research contributions in a given field.

At the undergraduate level, much focus is given to the development of research skills as a form of information or digital literacy.  K-12 schools and districts across the United States differ greatly in their approach to teaching digital literacy skills.  Thus, undergraduate students at HEIs come into lower division classes with a wide range of background and abilities (or lack thereof) informing their approach to research.  In a case study conducted at Texas Christian University (TCU) by Huddleston et al. (2019), faculty were surveyed to determine what research skills they felt were most needed and valuable for undergraduate students to have, and which skills undergraduate students tended to struggle with most.  A list of nine core skills for research success was produced based on faculty responses:

  1. Topic selection
  2. Search strategy
  3. Finding resources
  4. Differentiating source types
  5. Evaluating sources
  6. Synthesizing information
  7. Summarizing information
  8. Citing sources
  9. Reading and understanding citations

Perhaps unsurprisingly, faculty overwhelmingly felt that the skill they most wanted students to master by the time they graduated was the ability to critically evaluate information and sources.  This was, however, also found to be the weakest skill that undergraduate TCU students possessed, and that they were least likely to be able to do at a satisfactory level upon graduation (Huddleston et al., 2019).  It is no coincidence that the ability to think critically about an information source is needed now more than ever due to the overwhelming amount of information and sources available on the world wide web.  While access to valuable, credible sources of information expands, students need to be able to recognize “worthy” material in dynamic ways which allow them to differentiate their source types appropriately.  Certainly not all valuable research material is limited to the contents of academic journals, but neither is every blog post worthy of scholarly consideration. In this case study, Huddleston et al. (2019) note that the university library/librarians are important resources and guides when it comes to information literacy instruction, and a number of suggestions were made to help increase the visibility of librarians at the department level, leveraging their knowledge and training alongside faculty in a collaborative approach to teaching undergraduates needed research skills.

There is no denying that a certain level of digital and informational literacy is essential in all areas of higher education given that “research outputs across the academic disciplines are almost exclusively published electronically,” and therefore “organizing and managing these digital resources for purposes of review…are now essential skills for graduate study and life in academia” (Lubke et al., 2017, p. 285). Of course, in the year 2021, there are also a myriad of digital tools available that not only assist in the research process, but make it easier to practice information literacy and grow a researcher’s individual technical savvy. Assuming the literature review (i.e., the research paper) is the most frequent research-based activity conducted in higher education, especially at the graduate level, Lubke et al. (2017) propose a simple, three-step framework which can become the essential workflow for a paperless research project.

Lubke et al. (2017)

As the image suggests, stage one begins with selecting a digital tool to store and analyze sources.  Some suggested platforms include Zotero, EndNote, F1000 Workspace, RefWorks, and Mendeley.  Each tool has its own strengths and weaknesses, but generally speaking, each is an example of a digital tool that assists researchers in methodically storing and organizing possible source material for consideration, both in the current research process and for possible future use (e.g., a dissertation).  Once sources have been selected and stored, researchers may move to stage two, where they read, annotate, and analyze their sources.  This is where weak sources may be removed from consideration and where important pieces of information are mined and commented on in preparation for creating an academic argument (Lubke et al., 2017). In the annotation phase, digital tools like GoodReader can be used to take notes and highlight a text; then, annotated versions of sources may be saved separately from the originals.  Finally, in stage three, researchers may choose to employ qualitative data analysis software (QDAS) like QSR NVivo to synthesize themes and pull together information from across sources, ultimately drawing conclusions for publication.

The nature of research in higher education–and really, higher education itself–has changed drastically over the course of the last couple of decades.  Higher education is expanding in its scope and purpose, and there is increasing demand for academic research to have immediate, practical value. When conducting research, the most frequent problem faced by students and academics at all levels is what to do with the vast amounts of information we now have access to: how to source it, organize it, and analyze it critically.  Direct instruction in digital and information literacy continues to be a need in postsecondary education (both undergraduate and graduate), but there are a number of tools available that can be powerful aids in the research process, expanding our knowledge base and extending our capacity to think critically about sources, thus also expanding our potential for innovation.  There is no doubt that the nature of research will continue to evolve alongside the digital world…are we ready to consider the possibilities?

References

Brennan, J. & Teichler, U. (2008). The future of higher education and of higher education research. Higher Education, 56(3), 259-264.

Huddleston, B., Bond, J., Chenoweth, L., & Hull, T. (2019). Faculty perspectives on undergraduate research skills: Nine core skills for research success. Reference & User Services Quarterly, 59(2), 118-130.

Jongbloed, B., Enders, J., & Salerno, C. (2008). Higher education and its communities: Interconnections, interdependencies and a research agenda. Higher Education, 56(3), 303-324.

Lubke, J., Britt, V., Paulus, T., & Atkins, D. (2017). Hacking the literature review: Opportunities and innovations to improve the research process. Reference & User Services Quarterly, 56(4), 285-295.

Park, C. (2005). New variant PhD: The changing nature of the doctorate in the UK. Journal of Higher Education Policy and Management, 27(2), 189-207. https://www.tandfonline.com/doi/abs/10.1080/1360080050012006

Assessment in higher education during COVID-19 and beyond: Will it ever be the same?


Perhaps the word “unprecedented” has been overused in recent months, but it consistently seems to be the most fitting word to express the seismic shifts in all areas of life that have occurred during the COVID-19 pandemic. As K-12 and higher education institutions worldwide have grappled with rapid pivots to online teaching and learning (and have continued in these blended and fully remote modalities for much longer than anticipated), academics are now taking a moment to reflect on the past year and its lasting implications for the world of education.  After all, as business theory would posit, disruption leads to innovation.

As an educator interested in online teaching/learning in post-secondary education, I would specifically like to explore how the last year of remote learning has impacted assessment strategies in higher education. How have widespread shifts to online teaching/learning impacted college students’ abilities to demonstrate competency in varied and student-driven ways?  Higher education is notoriously “old school,” and post-secondary classes are most frequently lecture-based, led by instructors who are slower to adapt to more progressive, student-driven pedagogies.  And yet, as universities across the U.S. have made college and graduate school entrance exams like the SAT, ACT, and GRE optional for applicants in 2020 and 2021, a world beyond high stakes standardized testing can perhaps be imagined now more than ever.

Higher education instructors worldwide are already engaging with this issue and offering recommendations for ongoing change.  Perhaps surprisingly, some of the first publications I encountered came from educators in the graduate medical school community in both Australia and Pakistan. This was particularly striking because the medical sciences require lab work and clinical assessments, which are especially challenging to address remotely, and because they have long required high stakes testing at many stages of a medical student’s training.

According to Torda (2020), many medical school instructors in Australia have moved to lower the stakes of traditional written or multiple choice exams delivered online during the pandemic.  At the same time, a shift has been made to put more weight on multi-sourced feedback and student portfolios.  Where clinicals are concerned, simulation platforms such as the OSPIA (Online Simulated Patient Interaction and Assessment) system have been leveraged to bridge the gap until in-person clinicals may safely resume.  Another significant shift has been an emphasis on measuring a student’s ability to exhibit key professional skills, known in the medical community as “Entrustable Professional Activities,” over and above written examinations (Torda, 2020).  In other words, students are being assessed on their ability to apply their learning in professionally relevant contexts.  Some of these skills include, but are not limited to, recommending and interpreting common diagnostic and screening tests, providing a (virtual) oral presentation of a clinical encounter, forming clinical questions and retrieving evidence to advance patient care, and collaborating with professional colleagues (Torda, 2020).  It was noted that, taken as a whole, these measures go a long way toward easing test anxiety and motivations to cheat in an otherwise high stakes, demanding field of study (Torda, 2020).

Additional examples of altered assessment strategies in the medical community have been reported in Pakistan.  Similar to Torda (2020), Khan and Jawaid (2020) posit that the pandemic has necessitated lowering the stakes of online-proctored, traditional exams.  The authors advocate for the use of student portfolios and video evidence of professional tasks completed, as well as synchronous open book exams.  The authors note that the aim of synchronous open book exams “…is to assess the ability of students to analyze and solve a problem, [and to] assess critical thinking and creativity. With open book exams taken in real time, the issues of cheating can be minimized” (Khan & Jawaid, 2020, p. 109).

Changes in higher education assessment have also been reported in the United States. In June of 2020, Natasha Jankowski, in partnership with the National Institute for Learning Outcomes Assessment (NILOA), spearheaded a higher education survey meant to capture “a snapshot of assessment-related changes made during Spring 2020 in response to the sudden shift to remote instruction…” (Jankowski, 2020, p. 3). The survey included responses from faculty and staff at 624 different institutions, both public and private, with representation from all 50 states. The survey sought to record the learning changes that higher education instructors were making, the impacts of those changes on assessment culture, and the role of student voice in those decisions (Jankowski, 2020).  The survey results showed that 97% of respondents made learning, instructional, and assessment changes of some kind during Spring 2020. Changes included modifying assignments and assessments, allowing flexibility in assignment deadlines, shifting to a pass/fail grading model, and modifying assessment reporting deadlines.  Though some respondents made changes that included accepting alternative assignments, this change was made less often (Jankowski, 2020).  The survey also showed “…that assessment-related changes were undertaken to address student needs” (p. 3).  However, these changes may have had more to do with faculty/staff perception of student needs than with action taken in direct response to student reports: “Information gathered from students was less likely to influence decisions on what to change, and students were less likely to be asked to identify their needs prior to decisions being made” (p. 3). Consequently, it might be hard to define many of these changes in assessment as authentically “student-driven.”

Nevertheless, it seems that the pandemic has disrupted “business as usual” in higher education such that many of the changes reported above may in fact have lasting impact, with increasing opportunity for student voice to take a front seat in decision-making.  Dr. Funmi Amobi of Oregon State University’s Center for Teaching and Learning puts forth compelling arguments in favor of “reimagining” assessment in higher education in light of the lessons we’ve learned in the pandemic (Amobi, 2020).  Amobi asserts that the radical move to remote instruction has “refocused attention on improving assessment practices to alleviate student stress and anxiety, emphasize learning, and redress inequities in student success” (para. 2).  The author goes further and provides seven practical strategies for reimagining assessment in higher education.  Though these strategies can certainly be used effectively in remote learning environments, they are not only meant to solve problems related to online teaching and learning.  The strategies presented by Amobi (2020) should be taken seriously by all higher education instructors wanting to diversify their approach to assessments and create more student-centered learning experiences: 

  1. Use short, weekly quizzes to assess students formatively, and consider making the quizzes cumulative so that they may contribute to a summative assessment score.
  2. Ask for justification on multiple choice tests and grade the response instead of the answer.
  3. Create opportunities for collaborative, group tests.
  4. Have students construct exam questions themselves as a way of reviewing and exercising higher order thinking skills; then, include many of the student questions on the exam.
  5. Allow for notes or a study card and have students submit the prepared materials for credit along with the actual exam.
  6. Utilize practice tests.
  7. Spend time reviewing exams to address misunderstandings and improve future performance; consider giving credit for thoughtfully corrected exams where learning is evident.

In each of the reviewed publications, certain recurring themes were readily apparent: 1) it may be high time for colleges and universities to rethink the value of high stakes testing; 2) varied assessment strategies allow for a more effective presentation of student learning; 3) assessment is part of the overall learning process and should not be divorced from student voice; and 4) varied assessment strategies reduce test anxiety and the motivation to cheat (the latter being oft-cited as an obstacle in online assessment). 

We must avoid the underlying assumption that more technology is needed in order to solve the problems that technology introduces.  In other words, as the pandemic continues to require extended engagement in remote teaching, higher education instructors must not assume that the only way to assess online is to find a way to virtually proctor the same exam that would normally be given in a physical classroom (Kumar, 2020).  Instead, educators at all levels may take this opportunity to make meaningful changes to their use of assessments, both now and into the future, thinking critically and creatively about how to best meet students where they’re at.  

References:

Amobi, F. (2020, November 12). Reimagining assessment in the pandemic era: Comprehensive assessment of student learning. OSU Center for Teaching and Learning. https://blogs.oregonstate.edu/osuteaching/2020/11/12/reimagining-assessment-in-the-pandemic-era-comprehensive-assessment-of-student-learning/

Jankowski, N. A. (2020). Assessment during a crisis: Responding to a global pandemic. National Institute for Learning Outcomes Assessment. https://public.uhcl.edu/education/centers-initiatives/planning-assessment/documents/niloa-covid-assessment-report.pdf

Khan, R. A. & Jawaid, M. (2020). Technology enhanced assessment (TEA) in COVID 19 pandemic. Pakistan Journal of Medical Sciences, 36, 108-110. https://doi.org/10.12669/pjms.36.COVID19-S4.2795

Kumar, R. (2020). Assessing higher education in the COVID-19 era.  Brock Education Journal 29(2), 37-4. https://journals.library.brocku.ca/brocked

Torda, A. (2020). How COVID‐19 has pushed us into a medical education revolution. Internal Medicine Journal, 50(9), 1150-1153. https://doi.org/10.1111/imj.14882

Digital Learning Mission Statement

As a developing leader in the digital education space, I must understand and articulate the guiding principles and values that will shape my priorities and research in the present and guide my work in the future.  These same values should, of course, be reflected in my own digital footprint inasmuch as they inform my approach to leadership.

As a digital citizen advocate (ISTE Standard 7) and admissions official in higher education, I hope to elevate and address issues of access and equity, champion authenticity and integrity in digital spaces, and empower students, faculty, and staff to hold themselves accountable for their actions, roles and responsibilities in digital spaces.   

Access:   

Issues pertaining to access and equity in the higher education admissions world have long persisted, but they’ve been especially well-documented in recent years as controversy after controversy has made headlines.   

In 2019, a highly-publicized admissions scandal known as Operation Varsity Blues revealed conspiracies committed by more than 30 affluent parents, many in the entertainment industry, offering bribes to influence undergraduate admissions decisions at elite California universities.  The scandal was not, however, limited to misguided actions of wealthy, overzealous parents; it included investigations into the coaches and higher education admissions officials who were complicit (Greenspan, 2019).  This event—and others like it—highlight the fact that admissions decisions are not always objective and merit-based, and that those with the resources to game the system often do.  

Unfortunately, there are many ways in which bias and inequity infiltrate the admissions process and undermine the accessibility of a high-quality college education.  Standardized tests like the SAT or ACT for undergraduate admissions and the GRE or GMAT for graduate admissions come with their fair share of concerns.  Research has often revealed how racial bias affects test design, assessment, and student performance, thus bringing biased data into the admissions process to begin with (Choi, 2020).

That said, admissions portfolios without standardized test scores have one less “objective” data point to consider in the admissions process, putting more weight on other, more subjective pieces of an application (essays, recommendations, interviews, etc.). Most university admissions processes in the U.S.—both undergraduate and graduate—are human-centered and involve a “holistic review” of application materials (Alvero et al., 2020).  A study by Alvero et al. (2020) exploring bias in admissions essay reviews found that applicant demographic characteristics (namely gender and household income) were inferred by reviewers with a high level of accuracy, opening the door for biased conclusions drawn from the essay within a holistic review system.

It should go without saying that higher education institutions (HEIs) must seek to implement equitable, bias-free, admissions processes that guarantee access to all qualified students and prioritize diverse student bodies.  Though technology may not—and likely should not, on its own—be able to offer a comprehensive solution to admissions bias, there are certainly some digital tools that, when utilized thoughtfully by higher education admissions professionals, can assist in the quest to prioritize equitable admissions practices, addressing challenges and improving higher education communities for students, faculty, and staff alike (ISTE standard 7a).  

Without the wide recruiting net and public funding that large State institutions enjoy, the search for equitable recruiting/admissions practices and diverse classes may be hardest for small universities (Mintz, 2020). Taylor University—a small, private liberal arts university in Indiana—has turned to AI and algorithms for assistance in many aspects of the admissions and recruiting process.  Platforms such as the Education Cloud by Salesforce “…use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes” (Koenig, 2020).    

Understandably, admissions officials want to admit students who have the highest likelihood of “succeeding” (i.e., persisting through to graduation).  Noting that predictive tools must also account for bias that may exist in raw data reporting (like “name coding” or zip code bias), companies with products similar to the Education Cloud market them as fairer, more objective, more scientific ways to predict student success (Koenig, 2020).  As a result, HEIs like Taylor are confidently using these kinds of tools in the admissions process to help counteract biases that grow situationally and often unexpectedly from how admissions officers review applicants, including an inconsistent number of reviewers, reviewer exhaustion, personality preferences, etc. (Pangburn, 2019).

Additionally, AI assists with more consistent and comprehensive “background” checks for student data reported on an application (e.g., confirming whether or not a student was really an athlete) (Pangburn, 2019). Findings from the Alvero et al. (2020) study suggest that AI use and data auditing might be useful in informing the review process by checking for potential bias in human or computational readings.

Regardless of their specific function in the process, AI and algorithms have the potential to make the admissions system more equitable by identifying authentic data points and helping schools reduce unseen human biases that can impact admissions decisions while simultaneously making bias pitfalls more explicit.

Without denying the ways in which technology has offered significant assistance to—and perhaps progress in—the world of higher education admissions, it’s still wise to think critically about the function of AI and algorithms in order to ensure they’re helping more than they’re hurting.  There is a persistent and reasonable concern among digital ethicists that AI and algorithms simply mask and extend preexisting prejudice (Koenig, 2020).  It is dangerous to assume that technology is inherently objective or neutral, since technology is still created or designed by a human with implicit (or explicit) bias.  As author Ruha Benjamin states in Race After Technology: Abolitionist Tools for the New Jim Code (2019), “…coded inequity makes it easier and faster to produce racist outcomes” (p. 12). Thus, there is always a balance to be struck where technology is concerned.

As an admissions official at a small, private, liberal arts institution, I am well aware of the challenges presented to HEI recruitment and admissions processes in the present and future, and am heartened to consider the possibilities that AI and algorithms might bring to the table, especially regarding equitable admissions practices and recruiting more diverse student bodies.  However, echoing the sentiments of The New Jim Code, I do not believe that technology is inherently neutral, and I do not believe that the use of AI or algorithms is a comprehensive solution for admissions bias.  Higher education officials must proceed carefully, thoughtfully, and with the appropriate amount of skepticism in order to allow tech tools to reach their fullest potential in helping address issues of access, equity, and bias in higher education admissions (ISTE standard 7a).

Authenticity:   

Anyone with a digital footprint has participated in creating—knowingly or unknowingly—a digital identity for themselves.  Virtual space offers people the freedom “…to choose who they want to be, how they want to be, and whom they want to impress, without being constrained by the norms and behaviors that are desirable in the society to which they belong” (Camacho et al., 2012, p. 3177).  The internet provides new opportunities for spaces for learning, working, and socializing, and all of these spaces offer opportunities for identity to be renegotiated (Camacho et al., 2012). It is a worthy—if not simple—endeavor to pursue authenticity and integrity within any kind of digital identity, digital media representation, or online interaction.

Interactive communication technologies (ICTs) complicate meaningful pursuits of authenticity. These digitally-mediated realms of human interaction challenge what we see as authentic and make it harder to tell the difference between what is “real” and what is “fake” (Molleda, 2010).  One need look no further than “reality” TV, social media personas, and journalistic integrity in the era of “fake news” to understand that not all claims of authenticity in media are substantive.   

This does not mean, however, that authenticity in digital space fails to have inherent value or is impossible to achieve.  An ethic of authenticity goes beyond any kind of plan, presentation, or strategic marketing campaign; authenticity is about presenting the essence of what already exists and whether (or not) it can live up to its own and others’ expectations and needs (Molleda, 2010).  Exercising authenticity includes making informed decisions about protecting personal data while still curating the digital profile one intends to reflect (ISTE standard 7d).  Exercising authenticity also contributes to a wise use of digital platforms and healthy, meaningful online interactions (ISTE standard 7b).

In a comprehensive literature review, Molleda (2010) found that several pervasive themes, definitions, and perceptions of authenticity consistently surfaced across a variety of disciplines.  Taken as a whole, Molleda (2010) asserts that these claims may be used to “index” or measure authenticity to the extent that they are present in any given communication or media representation.  Some of these key aspects of authenticity include:  

  1. Being “true to self” and stated core values  
  2. Maintaining the essence of the original (form, idea, design, product, service, etc.)  
  3. Living up to others’ expectations and needs (e.g. delivering on promises)  
  4. Being original and thoughtfully created vs. mass produced  
  5. Existing beyond profit-making or corporate/organizational gains  
  6. Deriving from true human experience  

Molleda (2010) concludes that consistency between “the genuine nature of organizational offerings and their communication is crucial to overcome the eroding confidence in major social institutions” (p. 233). I for one hope to continue embodying an ethic of authenticity in both my personal and professional work—in digital spaces and otherwise—in order to set the stage for that consistency and to bolster societal confidence in the institution I’m a part of.  

Accountability:

The digital realm is an extended space where human interactions take place, and the volume of interactions taking place in digital spaces is only increasing.  As with any segment of human society, human flourishing, creativity, and innovation take place in spaces where people feel safe and invested in the community in which they find themselves. Thus, as a digital leader, it is important to empower others to seriously consider their individual roles and responsibilities in the digital world, inspiring them to use technology for civic engagement and improving their communities, virtually or otherwise (ISTE standard 7a).

Humans do not come ready-made with all of the savvy needed to engage with media and online communications in wise ways.  Media literacy education is needed to provide the cognitive and social scaffolding that leads to substantive, responsible civic engagement (Martens & Hobbs, 2015).  Media literacy is also a subset of one’s own ethical literacy, which is the ability to articulate and reflect upon one’s own moral life in order to encourage ethical, reasoned actions.  As a digital citizen advocate and marketing professional, I hope to support educators in all kinds of contexts—both personal and professional—to examine the sources of online media, be mindful consumers of online content, and consistently identify underlying assumptions in the content we interact with (ISTE standard 7c).  In an effort to be digitally wise and to “beat the algorithm” in digital spaces (both literally and metaphorically speaking), we must self-identify potential echo chambers and intentionally seek out alternative perspectives.  This requires a commitment to media literacy education in all kinds of formal and informal environments.    

Media literacy education equips educational leaders and students to foster a culture of respectful, responsible online interactions and a healthy, life-giving use of technology (ISTE standard 7b).  According to Hobbs (2010), media literacy is a subset of digital literacy skills involving:  

  1. the ability to access information by locating and sharing materials and comprehending information and ideas;
  2. the ability to create content in a variety of forms, making use of digital tools and technologies;
  3. the ability to reflect on one’s own conduct and communication by applying social responsibility and ethical principles (ISTE standard 7b);
  4. the ability to take social action by working individually and collaboratively to share knowledge and solve problems as a member of a community (ISTE standard 7a); and
  5. the ability to analyze messages in a variety of forms by identifying the author, purpose, and point of view and evaluating the quality and credibility of the content (ISTE standard 7c).

In a study conducted with 400 American high school students, findings showed that students who participated in a media literacy program had substantially higher levels of media knowledge and news/advertising analysis skills than other students (Martens & Hobbs, 2015).  Perhaps more importantly, information-seeking motives, media knowledge, and news analysis skills independently contributed to adolescents’ proclivity towards civic engagement (Martens & Hobbs, 2015), and civic engagement naturally requires dialogue with others within and outside of an individual’s immediate circle.  In other words, the more students were able to critically consider the content they were consuming and the motives behind why they were consuming it, the more they wanted to engage with alternative perspectives and be active, responsible, productive members of a larger community (ISTE standard 7a).  

This particular guiding principle is large in scope; its importance and relevance aren’t limited to a specific aspect of my professional context as much as it helps define an ethos for all actions, communication, and consumption that take place in the digital world.  In order to hold ourselves accountable for our identities and actions online, we must exercise agency.  The passive internet user/consumer is the one most likely to get caught in an echo chamber, develop destructive online habits, and communicate poorly in virtual space. The digitally wise will make consistent efforts to challenge their own thinking, create safe spaces for communication, intentionally seek out alternative voices, and actively reflect on their contributions to an online community, ultimately making digital spaces a little bit better than when they found them.

References:  

Alvero, A. J., Arthurs, N., Antonio, A., Domingue, B., Gebre-Medhin, B., Gieble, S., & Stevens, M. (2020). AI and holistic review: Informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 200–206). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375871

Benjamin, R. (2019).  Race after technology: Abolitionist tools for the New Jim Code. Polity.  

Camacho, M., Minelli, J., & Grosseck, G. (2012). Self and identity: Raising undergraduate students’ awareness on their digital footprints. Procedia Social & Behavioral Sciences, 46, 3176-3181.

Choi, Y.W. (2020, March 31). How to address racial bias in standardized testing. Next Gen Learning. https://www.nextgenlearning.org/articles/racial-bias-standardized-testing  

Greenspan, R. (2019, May 15). Lori Loughlin and Felicity Huffman’s college admissions scandal remains ongoing. Here are the latest developments. Time. https://time.com/5549921/college-admissions-bribery-scandal/  

Hobbs, R. (2010). Digital and media literacy: A plan of action. The Aspen Institute Communications and Society Program & the John S. and James L. Knight Foundation. https://knightfoundation.org/reports/digital-and-media-literacy-plan-action/  

Koenig, R. (2020, July 10). As colleges move away from the SAT, will algorithms step in? EdSurge. https://www.edsurge.com/news/2020-07-10-as-colleges-move-away-from-the-sat-will-admissions-algorithms-step-in  

Martens, H. & Hobbs, R. (2015). How media literacy supports civic engagement in a digital age.  Atlantic Journal of Communication, 23, 120–137.  

Mintz, S. (2020, July 13). Equity in college admissions. Inside Higher Ed. https://www.insidehighered.com/blogs/higher-ed-gamma/equity-college-admissions-0  

Molleda, J. (2010). Authenticity and the construct’s dimensions in public relations and communication research. Journal of Communication Management, 14(3), 223-236. https://doi.org/10.1108/13632541011064508

Pangburn, D. (2019, May 17). Schools are using software to help pick who gets in. What could go wrong? Fast Company. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong  

Student Flourishing in the Virtual Classroom


There is no doubt that the COVID-19 pandemic has greatly accelerated the rate at which schools and universities of all shapes and sizes have had to move to online teaching and learning modalities, even if only as a short-term conduit for allowing formal education to continue in these unprecedented times.  There is also no doubt that this emergency shift to online teaching has left many concerned about overall student well-being including screen fatigue, issues of access and equity, teacher readiness, social-emotional support in a digital environment, and the overall efficacy of the educational endeavor for students of all ages in digital mediums.  Is there a light at the end of the tunnel?  Or might there already be some twinkle lights strung up along the tunnel walls guiding the way? 

In this post I’d like to explore some of the evidence that already exists in support of student flourishing—particularly at the postsecondary level—in hybrid or fully online programs, as well as what best practices can be used to support student well-being in all online teaching/learning endeavors, during COVID-19 and beyond.  Thankfully, the pandemic didn’t bring about the dawn of online pedagogy in higher education, and postsecondary educators have places to turn in order to think critically (and perhaps hopefully) about student success and well-being, be it academic or personal, in the digital classroom.

Evidence of Flourishing:

Few would argue that an in-person classroom experience can be identically replicated online.  In fact, those who have attempted such a replication have likely been met with disappointing results.  But perhaps educators shouldn’t be trying to replicate a physical classroom experience in an online environment at all.  Rather, they should think of the virtual classroom as a new endeavor; it is a new context with new possibilities to explore, and online pedagogy may bring new teaching/learning benefits to the table that a physical classroom lacks.

Indeed, there’s evidence to suggest that a hybrid of in-person and online teaching may be the very best approach to postsecondary learning—with or without a pandemic—as it capitalizes on the “best of both worlds.”  In an extensive, multi-year case study conducted at the University of Central Florida in 2004, student success in blended programs (success being defined as achieving a C- grade or higher) actually exceeded the success rates of students in either fully online or fully face-to-face programs (Dziuban et al., 2004).  Furthermore, a meta-analysis of studies on online and hybrid learning conducted by the U.S. Department of Education in 2010 reported that students in online and hybrid programs showed greater learning gains than students in face-to-face modalities, with students in hybrid courses showing the largest gains of all delivery formats (Means et al., 2010).  In yet another study (Chen & Chiou, 2014) measuring the learning outcomes, satisfaction, sense of community, and learning styles of 140 second-year university students in Taiwan, students in a hybrid course had significantly higher scores and overall course satisfaction than students participating in face-to-face courses. The results also indicated that students in hybrid learning classrooms actually felt a stronger sense of community than did students in a traditional classroom setting (Chen & Chiou, 2014).

While one must make many allowances for the various emergency situations brought on by the pandemic (and acknowledge the distinction between emergency remote instruction and true online teaching/learning), there is plenty of evidence to suggest that well-implemented online teaching/learning can truly enhance student learning beyond what might otherwise be accomplished in a fully face-to-face environment.

Some Best Practices in Online Instruction:

Technology-mediated education is making it possible for students to participate in programs, access content, and connect in ways they were previously unable to.  Rather than viewing the Internet as a necessary evil for distance learning that ultimately begets isolated student learning experiences, digital education should, first and foremost, be connective and communal.  This means a professor accustomed to lecture-based learning in a physical classroom will need to consider a new approach in order to prioritize student voice in the learning process.  In an online context, this means there should be dynamic opportunities for students to engage in debate, reflection, collaboration, and peer review (Weigel, 2002).

If educators are going to seriously account for the rich background experiences, varied motivations, and personal agency of their postsecondary learners, they must also take into account the larger “lifewide” learning that takes place within the lives of their students (Peters & Romero, 2019). Student learning at any age is both formal and informal, and what takes place in a formal classroom environment—digital or otherwise—is influenced by the informal learning and daily living that take place outside of it.  If deep learning occurs, a student’s world and daily life should be altered by the new schemas created through formal classroom learning.  In a multicase, multisite study conducted by Mitchell Peters and Marc Romero in 2019, 13 different fully-online graduate programs in Spain, the US, and the UK were examined in order to analyze learning processes across a continuum of contexts (i.e., to understand to what extent learning was used by students outside of the formal classroom environment).  In this study, certain common pedagogical strategies arose across programs in support of successful student learning and engagement, including:

  1. Developing core skills in information literacy and knowledge management,
  2. Community-building through discussion and debate forums,
  3. Making connections between academic study and professional practice,
  4. Connecting micro-scale tasks (like weekly posts) with macro-scale tasks (like a final project), and
  5. Applying professional interests and experiences into course assignments and interest-driven research.

(Peters & Romero, 2019).

In many regards, each of these pedagogical strategies is ultimately teaching students to “learn how to learn” so that the skills they cultivate in the classroom can be applied over and over again elsewhere. This means that, where digital learning is concerned, the most important learning activities aren’t actually taking place in a large, synchronous Zoom meeting or broadcasted lecture series.

On a practical level, educators can also give attention to some of these simple “tricks of the trade” that have been proven to enhance student learning experiences in a virtual classroom:

  1. Communicate often with students to promote a feeling of connectedness
  2. Create ample space for student voice
  3. Take care that courses set up in a learning management system are intuitively laid out, action-oriented, and adaptable to student needs
  4. Give timely feedback and highlight student strengths
  5. Create opportunities for synchronous activities when possible
  6. Be explicit about expected course outcomes

(Vlachopoulos & Makri, 2019)

At the end of the day, learning and schooling no longer have the same direct relationship they had for most of the 20th century; devices and digital libraries allow anyone to have access to information at any time (Wilen, 2009). Schools, teachers, and printed books no longer hold the “keys to the kingdom” as sources of information.  Online education, then, will not function effectively as a large-scale effort to teach students information through a standardized curriculum.  Rather, education must be a highly relevant venture that enables individual students to do something with the virtually endless information and resources they have access to (Wilen, 2009).

Student Agency & Connection Lead to Student Wellbeing:

When considering how to best support student wellbeing in an online learning environment (at every level), it’s important to remember that the student is not a passive entity.  Indeed, the extent to which students are able to exercise agency in their learning can have a significant impact on their academic success, their attitude towards the learning experience, and their social-emotional wellbeing.  In this case, agency can be interpreted as a student’s ability to exercise choice and be meaningfully present and interactive in the online learning environment.

One of the significant benefits of learning management systems and digital classrooms is the existence of a platform through which resources and learning materials can be shared and posted for any length of time.  Thus, students have the ability to review online course materials at their own pace and engage at a rate that makes sense for their individual needs (Park et al., 2019).  Allowing students the time and space to persist in completing online learning activities can have a significant impact on a student’s success in an academic course (Park et al., 2019).

Additionally, game-based learning activities, opportunities for collaboration in group projects, participation in threaded discussions, and dedicated spaces for students to freely express their views all assist students in taking ownership of their learning and pursuing their learning interests as those interests materialize in—and overlap with—the course content (Vlachopoulos & Makri, 2019).  These are the activities that directly impact student engagement in a course, as well as the likelihood that a student will have a positive attitude towards the learning experience.

For many traditionally-aged students navigating undergraduate studies during the pandemic, the decreased ability to connect socially with peers, faculty, and support staff has had a direct, negative impact on their academic motivation and overall sense of wellbeing (Burke, 2020).  Thus, creating time and space in the digital learning environment for social interaction, open communication, and for students to gain a sense of identity within the virtual classroom is perhaps more important than ever. 

Finally, it’s very much worth mentioning that the extent to which all spheres of life have been impacted by COVID-19—not just the classroom—is unprecedented.  Helping students think of remote learning as an opportunity for growth, one that will have challenges and limitations as well as potential and new kinds of goals that can be achieved, can help them maintain a sense of purpose and direction amidst the chaos (Burke, 2020).  Growth mindset has already been shown to positively impact student learning at all levels—what better time to remind students (and educators) of the opportunities for growth in the present?

References:

Burke, L. (2020, October 27). Moving into the long term. Inside Higher Ed. https://www.insidehighered.com/digital-learning/article/2020/10/27/long-term-online-learning-pandemic-may-impact-students-well

Chen, B. & Chiou, H. (2014). Learning style, sense of community, and learning effectiveness in hybrid learning environment. Interactive Learning Environments, 22(4), 485-496. https://www.tandfonline.com/doi/abs/10.1080/10494820.2012.680971

Dziuban, C., Hartman, J., Moskal, P., Sorg, S., & Truman, B. (2004). Three ALN modalities: an institutional perspective. In J. R. Bourne, & J. C. Moore (Eds.), Elements of quality online education: Into the mainstream (127–148). Sloan Consortium.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Department of Education, Office of Planning, Evaluation and Policy Development. https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Park, E., Martin, F., & Lambert, R. (2019). Examining predictive factors for student success in a hybrid learning course. The Quarterly Review of Distance Education 20(2), 11-27.

Peters, M. & Romero, M. (2019) Lifelong learning ecologies in online higher education: Students’ engagement in the continuum between formal and informal learning. British Journal of Educational Technology, 50(4), 1729.

Vlachopoulos, D., & Makri, A. (2019). Online communication and interaction in distance higher education: A framework study of good practice. International Review of Education, 65,605–632. https://doi.org/10.1007/s11159-019-09792-3

Weigel, V. B. (2002). Deep learning for a digital age. Jossey-Bass.

Wilen, T. (2009). .Edu: Technology and learning environments in higher education. Peter Lang Publishing.

Bias in Higher Ed Admissions: Is New Tech Helping or Hurting?

Higher education admissions practices have made headlines in recent years, and issues of access and equity have been at the heart of the controversies. In 2019, a highly-publicized admissions scandal known as Operation Varsity Blues revealed conspiracies committed by more than 30 affluent parents, many in the entertainment industry, who offered bribes to influence undergraduate admissions decisions at elite California universities.  The scandal was not limited to the misguided actions of wealthy, overzealous parents, however; it also included investigations into the coaches and higher education admissions officials who were complicit (Greenspan, 2019).

Harvard University has also seen its fair share of scandals, including a bribery scheme of its own and controversy over racial bias in the admissions process.  In 2019, an organization of rejected applicants and prospective students suing under the title “Students for Fair Admissions” took Harvard to court over several core claims:

  1. That Harvard had intentionally discriminated against Asian-Americans
  2. That Harvard had used race as a predominant factor in admissions decisions
  3. That Harvard had used racial balancing and considered the race of applicants without first exhausting race-neutral alternatives

Demonstrators hold signs in front of a courthouse in Boston, Massachusetts in October 2018, Xinhua/Barcroft Images

In line with the tenets of affirmative action, the court eventually ruled that Harvard could continue considering race in its admissions process in pursuit of a diverse class, and that race had never (illegally) been used to “punish” an Asian-American student in the review process (Hassan, 2019).  Yet regardless of the ruling, Harvard was forced to look long and hard at its admissions processes and to meaningfully consider where implicit bias might be negatively affecting admissions decisions.

Another area of bias that has been identified in the college admissions system nationwide is the use of standardized tests, especially the SAT or ACT for undergraduate admissions and the GRE or GMAT for graduate admissions.  Moves away from these tests have only accelerated during the pandemic, with many colleges and universities making SAT/ACT or GRE/GMAT scores optional for admission in 2020-2021 (Koenig, 2020).  Research has repeatedly revealed how racial bias affects test design, assessment, and performance on these standardized exams, thus bringing biased data into the admissions process to begin with (Choi, 2020).

That said, admissions portfolios without standardized test scores have one less “objective” data point to consider in the admissions process, putting more weight on other, more subjective pieces of an application (essays, recommendations, interviews, etc.). Most university admissions processes in the U.S.—both undergraduate and graduate—are human-centered and involve a “holistic review” of application materials (Alvero et al., 2020).  A study by Alvero et al. (2020) exploring bias in admissions essay reviews found that applicant demographic characteristics (namely gender and household income) could be inferred from the essays with a high level of accuracy, opening the door for biased conclusions drawn from the essay within a holistic review system.
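To get an intuition for how an essay can leak demographic signals at all, consider a deliberately crude sketch. This is not Alvero et al.’s actual method, and the “marker” words below are invented for illustration; real models learn far subtler statistical patterns from thousands of essays. But the principle—surface features of text correlating with demographics—is the same:

```python
# Toy illustration (NOT Alvero et al.'s method): crude surface features
# of an essay can correlate with demographics. The invented "marker"
# words stand in for signals a real model would learn statistically.
income_markers = {
    "high": {"regatta", "internship", "europe", "tutor"},
    "low":  {"shift", "bus", "overtime", "babysitting"},
}

def infer_income_band(essay: str) -> str:
    # Naive tokenization; a real pipeline would strip punctuation, etc.
    words = set(essay.lower().split())
    scores = {band: len(words & markers)
              for band, markers in income_markers.items()}
    return max(scores, key=scores.get)

essay = "Between my bus commute and weekend shift I still found time to write."
print(infer_income_band(essay))
```

Even this toy heuristic “guesses” a household income band from word choice alone, which is precisely why demographic inference within holistic review raises bias concerns.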

So the question remains—how do higher education institutions (HEIs) implement equitable, bias-free admissions processes that guarantee access to all qualified students and prioritize diverse student bodies?  To assist in this worthwhile quest for equity, many HEIs are turning to algorithms and AI to see what they have to offer.

Lending a Helping Hand

Without the wide recruiting net and public funding that large state institutions enjoy, the search for equitable recruiting/admissions practices and diverse classes may be hardest for small universities (Mintz, 2020). Taylor University—a small, private liberal arts university in Indiana—has turned to the Salesforce Education Cloud (and the AI and algorithmic tools within) for assistance in many aspects of the admissions and recruiting process.  The Education Cloud and other similar platforms “…use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes” (Koenig, 2020).

As a smaller university with limited resources, Taylor uses the Education Cloud to help its admissions officers zero in on the types of applicants they feel are most likely to enroll, and then to identify target populations in other areas of the country that exhibit similar data profiles.  Taylor can then strategically and economically focus recruiting efforts where they’re—statistically speaking—likely to get the most interest.  Having enrolled its largest freshman class ever in fall 2015, Taylor is, in many ways, a success story, and it now uses Education Cloud data services to predict student success outcomes and make decisions about distributing financial aid and scholarships (Pangburn, 2019).
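The “lookalike” targeting described above can be sketched in a few lines. To be clear, this is not Salesforce’s actual model; the features, names, and numbers below are invented, and a real system would normalize and weigh many more variables. The core idea is simply ranking prospects by similarity to students who previously enrolled:

```python
from math import dist

# Invented data: feature vectors for admitted students who enrolled.
# Features: (GPA, recruiting touchpoints, miles from campus)
enrolled = [
    (3.6, 4, 120),
    (3.8, 5, 90),
    (3.4, 3, 150),
]

prospects = {
    "prospect_a": (3.7, 4, 110),
    "prospect_b": (2.9, 1, 600),
    "prospect_c": (3.5, 5, 140),
}

# Score each prospect by Euclidean distance to the centroid of past
# enrollees: smaller distance = "looks more like" students who enrolled.
n = len(enrolled)
centroid = tuple(sum(s[i] for s in enrolled) / n for i in range(3))

ranked = sorted(prospects, key=lambda p: dist(prospects[p], centroid))
print(ranked)
```

Note that the unscaled mileage feature dominates the distance here, quietly deciding who looks “enrollable”; that kind of unexamined modeling choice is exactly where equity concerns can creep in.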

Understandably, admissions officials want to admit students who have the highest likelihood of “succeeding” (i.e. persisting through to graduation).  Claiming that their predictive tools account for bias that may exist in raw data reporting (like “name coding” or zip code bias), companies with products similar to the Education Cloud market fairer, more objective, more scientific ways to predict student success (Koenig, 2020).  As a result, HEIs like Taylor are confidently using these kinds of tools in the admissions process to help counteract biases that grow “situationally” and often unexpectedly from how admissions officers review applicants, including an inconsistent number of reviewers, reviewer exhaustion, personality preferences, etc. (Pangburn, 2019).   Additionally, AI assists with more consistent and comprehensive “background” checks for student data reported on an application (e.g. confirming whether or not a student was really an athlete) (Pangburn, 2019). Findings from the Alvero et al. (2020) study mentioned earlier suggested that AI use and data auditing might be useful in informing the review process by checking potential bias in human or computational readings.

Another interesting proposal for the use of tech in the admissions process is the gamification of data points.  Companies like KnackApp are marketing recruitment tools that would have applicants play a game for 10 minutes.  Behind the scenes, algorithms allegedly gather information about users’ “microbehaviors,” such as the types of mistakes they make, whether those mistakes are repeated, the extent to which the player takes experimental paths, how the player is processing information, and the player’s overall potential for learning (Koenig, 2020). The CEO of KnackApp, Guy Halftek, claims that colleges outside the U.S. already use KnackApp in student advising, and the hope is that U.S. colleges will begin using the platform in the admissions process to create gamified assessments that would provide additional data points and measurements for desirable traits that might not otherwise be found in standardized test scores, GPA, or an entrance essay (Koenig, 2020).

Sample screenshot of a KnackApp game, apkpure.com

Regardless of their specific function in the overall process, AI and algorithms are being pitched as ways to make the admissions system more equitable: identifying authentic data points, helping schools reduce unseen human biases that can impact admissions decisions, and making bias pitfalls more explicit.

What’s The Catch?

Without denying the ways in which technology has offered significant assistance to—and perhaps progress in—the world of HEI admissions, it’s wise to think critically about the function of AI and algorithms and whether or not they are in fact assisting in a quest for equity.

To begin with, there is a persistent concern among digital ethicists that AI and algorithms simply mask and extend preexisting prejudice (Koenig, 2020).  It is dangerous to assume that technology is inherently objective or neutral, since technology is still created and designed by humans with implicit (or explicit) biases (Benjamin, 2019).  As Ruha Benjamin states in the 2019 publication Race After Technology: Abolitionist Tools for the New Jim Code, “…coded inequity makes it easier and faster to produce racist outcomes” (p. 12).

Some areas of concern with using AI and algorithms in college admissions include:

  1. Large software companies like Salesforce seem to avoid admitting that bias could ever be an underlying issue in their own tools, instead marketing as though they have “solved” the bias problem (Pangburn, 2019).
  2. Predictive concerns: if future decisions are made on past data, a feedback loop of replicated bias might ensue (Pangburn, 2019).
  3. If, based on data, universities strategically market only to desirable candidates, they’ll likely pay more visits and make more marketing efforts to students in affluent areas and those who are likely to yield more tuition revenue (Pangburn, 2019).
  4. When it comes to “data-based” decision-making, it’s easier to get data for white, upper-middle-class suburban kids, and models (for recruiting goals, student success, and graduation outcomes) end up being built on easier data (Koenig, 2020).
  5. Opportunities for profit maximization are often rebranded as bias minimization, regardless of the extent to which that is accurate (Benjamin, 2019).
  6. Data privacy… (Koenig, 2020)
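The feedback-loop worry in item 2 can be made concrete with a toy simulation. Everything below is hypothetical: the group names, the 0.65 admit threshold, and the naive success model are invented for illustration and bear no relation to any real admissions product.

```python
# Toy simulation of the "replicated bias" feedback loop: a model trained
# only on past admits keeps favoring the group it already has data for.

# Two applicant groups with IDENTICAL underlying qualifications; the only
# difference is how much historical data exists for each group.
history = {
    "group_a": [True] * 350 + [False] * 150,  # 500 records, 70% success
    "group_b": [True, False, False],          # 3 sparse, noisy records
}

def predicted_rate(group):
    """Naive 'model': observed success rate with smoothing toward 0.5,
    which drags data-poor groups toward the middle."""
    records = history[group]
    return (sum(records) + 1) / (len(records) + 2)

ADMIT_THRESHOLD = 0.65  # hypothetical cutoff

for admissions_cycle in range(5):
    for group in history:
        if predicted_rate(group) >= ADMIT_THRESHOLD:
            # Admitted students generate fresh outcome data...
            history[group].append(True)
        # ...but rejected groups generate NO new data, so their
        # estimate can never improve: past decisions feed future ones.

print(round(predicted_rate("group_a"), 2))  # stays above the cutoff
print(round(predicted_rate("group_b"), 2))  # stuck at 0.4 indefinitely
```

After five cycles, group_b still has only its original three records and remains below the threshold forever, even though its true qualifications match group_a's. This is the sense in which decisions made on past data can replicate the bias embedded in that data.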

Finally, there’s always the question of human abilities and “soft skills,” and to what extent those should be modified or replaced by AI in any professional field.  There’s no denying the limitations AI and algorithms face in making appropriate contextual considerations.  For example, how does AI account for a high school or for-profit college that historically participates in grade inflation?  How does it account for the additional challenges faced by a lower-income or first-generation student (Pangburn, 2019)?  There are also no guarantees that applicants won’t figure out how to “game” data-based admissions systems down the road by strategically optimizing their own data, and if/when that happens, you can bet that the most educated, wealthiest, highest-resourced students and families will be the ones doing the optimizing, thereby replicating the very system of bias and inequity that already exists (Pangburn, 2019).

As an admissions official at a small, liberal arts institution, I am well aware of the challenges facing recruitment and admissions processes now and in the future, and I am heartened to consider the possibilities that AI and algorithms might bring to the table, especially regarding efforts toward equitable admissions practices and recruiting more diverse student bodies.  However, echoing the sentiments of Ruha Benjamin in Race After Technology, I do not believe that technology is inherently neutral, and I do not believe that the use of AI or algorithms is a comprehensive solution for admissions bias.  Higher education officials must proceed carefully, thoughtfully, and with the appropriate amount of skepticism.

References:

Alvero, A.J., Arthurs, N., Antonio, A., Domingue, B., Gebre-Medhin, B., Giebel, S., & Stevens, M. (2020). AI and holistic review: Informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 200–206). Association for Computing Machinery. https://doi.org/10.1145/3375627.3375871

Benjamin, R. (2019).  Race after technology: Abolitionist tools for the New Jim Code. Polity.

Choi, Y.W. (2020, March 31). How to address racial bias in standardized testing. Next Gen Learning. https://www.nextgenlearning.org/articles/racial-bias-standardized-testing

Greenspan, R. (2019, May 15). Lori Loughlin and Felicity Huffman’s college admissions scandal remains ongoing. Here are the latest developments. Time. https://time.com/5549921/college-admissions-bribery-scandal/

Hassan, A. (2019, November 5). 5 takeaways from the Harvard admissions ruling. The New York Times. https://www.nytimes.com/2019/10/02/us/takeaways-harvard-ruling-admissions.html

Koenig, R. (2020, July 10). As colleges move away from the SAT, will algorithms step in? EdSurge. https://www.edsurge.com/news/2020-07-10-as-colleges-move-away-from-the-sat-will-admissions-algorithms-step-in

Mintz, S. (2020, July 13). Equity in college admissions. Inside Higher Ed. https://www.insidehighered.com/blogs/higher-ed-gamma/equity-college-admissions-0

Pangburn, D. (2019, May 17). Schools are using software to help pick who gets in. What could go wrong? Fast Company. https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong

An Ethic of Authenticity in Digital Media & Communications

As someone who is currently working in a recruiting/marketing role in higher education, I am consistently involved in the media representation and strategic communications produced on behalf of the programs and organization I work with/for.  I also have to make choices about how to engage in interactive technological communications (e-mails, video conferencing) and represent myself in digitally-mediated relationships as I recruit and advise prospective students. As has been the case for so many others, COVID-19 has only increased the amount of work and communication I do in the digital space.

Two ethical values that are very important to me in my professional context and as a digital citizen advocate are authenticity and transparency, especially as they relate to media representation and digitally-mediated relationships.  Higher education institutions frequently adopt and implement new information and communication technologies (ICTs), but little time is given to critical reflection on how those technologies are being used (or how they ought to be used) by the students, faculty, and staff who engage with them (Paulus et al., 2019). So, in an effort to think critically, I’d like to explore what authenticity and transparency look like in media representation and online presence for a large organization and, more specifically, for an individual representative of that organization.

Digital media and ICTs complicate meaningful pursuits of authenticity from a public relations standpoint; these technology-mediated realms of human interaction challenge what we see as authentic and make it harder to tell the difference between what is “real” and what is “fake” (Molleda, 2010).  One need look no further than “reality” TV, social media personas, and journalistic integrity in the era of “fake news” to understand that not all claims of authenticity in media are substantive.

Additionally, an exchange of information—and by extension the authenticity of that information—is just that: an exchange.  The presentation of information, authentic or otherwise, isn’t unidirectional; all actors have the potential to influence one another and the flow of information (Chin-Fook & Simmonds, 2011). Thus, the power behind authenticity claims does not rest solely with the creator or presenter of the content, but also with those who will be interpreting and negotiating meaning from it.  That which is considered authentic by stakeholders will be socially and culturally influenced (Molleda, 2010).

But perhaps we must first further explore what the ethic of authenticity is before attempting to examine what it looks like in practice in digital spaces, especially from a marketing and communications lens.  In a comprehensive literature review, Molleda (2010) found that several pervasive themes, definitions, and perceptions of authenticity consistently surfaced across a variety of disciplines.  Taken as a whole, Molleda (2010) asserts that these claims may be used to “index” or measure authenticity to the extent that they are present in any given communication or media representation.  Some of these key aspects of authenticity include:

  1. Being “true to self” and stated core values
  2. Maintaining the essence of the original (form, idea, design, product, service, etc.)
  3. Living up to others’ expectations and needs (e.g. delivering on promises)
  4. Being original and thoughtfully created vs. mass produced
  5. Existing beyond profit-making or corporate/organizational gains
  6. *Deriving from true human experience*

I have italicized the last of these markers of authenticity because it seems the most comprehensive—and perhaps the most important—marker in a digital space. Granted, certain aspects of human experience are not easily replicated in online media, including a grounded sense of time/place and sensory cues like smell, physical touch, or visual elements outside of a screen that would otherwise add context (McGregor, 2013). Consider the idyllic social media picture that fails to incorporate the “mess” of real life that we might otherwise see just outside the frame.  What can be conveyed in digital media and communications, however, are human stories and that which directly flows out of those experiences.  As Molleda (2010) says, “The search for, and identification of, real stories and people within organizations…is part of the critical job that [public relations] practitioners must perform” (p. 224).

In my context, that means communication and marketing efforts grounded in student narratives and in my own relevant experiences will be organically authentic. Photographs, testimonials, and statistics derived from current student cohorts and recent graduates aren’t just helpful marketing tools; they carry ethical weight.  They will also likely be the most effective way to engage future students sincerely, in a manner that is received as authentic.

Additionally, a good test for authenticity is whether or not I—as an individual representative of my organization—am willing to “openly, publicly and personally be identified as the persuader in a particular circumstance” (Baker & Martinson, 2002, p. 17).  In other words, in my sphere of influence with media representation and communications, it’s important to stop and ask myself: am I willing to be personally associated with this content (Molleda, 2010)? There’s no doubt that in the digital space there are often blurred lines between personal and professional online identities, so it’s worthwhile to consider that professional actions and communications online might easily have personal consequences (and vice versa), both now and in the future.

Finally, and perhaps anecdotally, if we assume that true human experience lies at the heart of authenticity in the digital realm, there must be room for “customers” to decide whether or not the offerings (in this case, graduate programs) are indeed the best fit for their needs. This goes against the nature of higher education institutions that are perpetually competing for student tuition dollars, but if practicing authenticity includes a willingness to look beyond profit-making and organizational gains, then media representation and student interactions will allow students the space to decide what’s best for them, even if that means going somewhere else.  If it becomes clear that the “product” I’m associated with can’t appropriately serve the aspirations and expectations of the interested party, then an ethic of authenticity would demand that I communicate as much to the prospect.  This also requires attention to any “lie of omission” that might exist wherever a transfer of information takes place, an offense that is much easier to commit in a digital space.  An ethic of authenticity goes beyond any kind of plan or strategic marketing campaign; authenticity is about presenting the essence of what already exists and whether (or not) it has the ability to live up to its own and others’ expectations and needs (Molleda, 2010).

Molleda (2010) concludes that consistency between “the genuine nature of organizational offerings and their communication is crucial to overcome the eroding confidence in major social institutions” (p. 233). I for one hope to continue embodying an ethic of authenticity in my professional work—in digital spaces and otherwise—in order to set the stage for that consistency and to bolster societal confidence in at least one higher education institution.

References

Baker, S., & Martinson, D.L. (2002). Out of the red-light district: Five principles for ethically proactive public relations. Public Relations Quarterly, 47(3), 15–19.

Chin-Fook, L. & Simmonds, H. (2011). Redefining gatekeeping theory for a digital age. McMaster Journal of Communications, 8, 7-34. https://journals.mcmaster.ca/mjc/article/view/259

McGregor, K.M. (2013). Defining the ‘authentic’: Identity, self-presentation and gender in Web 2.0 networked social media [Doctoral dissertation, University of Edinburgh]. Edinburgh Research Archive. http://hdl.handle.net/1842/16240

Molleda, J. (2010). Authenticity and the construct’s dimensions in public relations and communication research. Journal of Communication Management, 14(3), 223–236. http://dx.doi.org.ezproxy.spu.edu/10.1108/13632541011064508

Paulus, M.J., Baker, B.D., & Langford, M.D. (2019). A framework for digital wisdom in higher education. Christian Scholar’s Review, 49(1), 41-61. http://works.bepress.com/michael_paulus/68/