Question: Draft educational and thoughtful responses to the posts below.
POST #1 (John):
How Privacy Has Shifted
I greatly enjoyed this week's readings, as they focus on a domain I am very familiar with and passionate about: Information Technology (IT). They were especially impactful because they dive into how privacy has changed and evolved over the years, and I always enjoy discussions of privacy and security as they relate to IT.
We live in a data-driven world in which data is as valuable as any tangible good. Sullins (2019) does a fantastic job of explaining how powerful digital information is: "The control of information is power, and in an information economy such as we find ourselves today, it may be the ultimate form of political power. We live in a world rich in data and the technology to produce, record, and store vast amounts of this data has developed rapidly." As a result, more and more of the data we produce as users is being collected, stored, and used in countless ways. What I found most interesting is that our notion of privacy seems to have shifted as this reliance on data has grown.
Privacy began much as most of us would imagine it: users of these technologies should be able to control access to their personal and/or sensitive data while using them (Sullins, 2019). Over time, privacy seems to have shifted away from this idea toward the assumption that this data is necessary for the technologies to work. Rather than giving users the outright ability to restrict data collection, privacy policies and laws have focused more on ensuring companies are transparent about HOW this data will be used and HOW they will protect it from unauthorized disclosure. This is evident if you review how privacy laws have changed over the years: they have shifted toward requiring companies to be transparent about how they use user data rather than preventing the collection in the first place (History of Privacy Timeline, n.d.).
Most privacy policies state that by using the technology you surrender your data to it, BUT the company will do its due diligence to protect that data from disclosure, rather than giving the user the right to decide what data can or cannot be collected.
Do you all think this is unavoidable? Or should there be a push to restore users' ability to fully define their expected level of privacy? I'd love to hear your thoughts!
References:
History of privacy timeline. (n.d.). Safe Computing, University of Michigan. https://safecomputing.umich.edu/protect-privacy/history-of-privacy-timeline
Sullins, J. (2019). Information technology and moral values. Stanford Encyclopedia of Philosophy.
POST #2 (Jessica):
After reading both "Information Technology and Moral Values" and Audrey Watters' "The Ed-Tech Imaginary," I keep coming back to one key idea: we need to be more critical of the stories we tell (and are told) about technology in education. Both pieces make it clear that technology is never truly neutral. It's always shaped by values, assumptions, and power.
Watters challenges the way "reimagining education" is often used to justify big changes that actually harm public education, like cutting funding or pushing privatized tech solutions. That really made me think about how often we celebrate "innovation" without questioning who benefits. The Stanford article builds on that by outlining how moral values like privacy, justice, and autonomy are directly impacted by tech decisions. For example, student data collection may improve personalization, but what happens to that data long-term? Who owns it, and how is it used?
As for using tech to close learning gaps, yes, there's potential. But as Watters points out, we have to be careful not to buy into overly simplified solutions. Just because a tool claims to help doesn't mean it's designed with students' best interests in mind, especially when it comes to historically marginalized groups. This reminded me of Ruha Benjamin's idea of the "New Jim Code," where tech can reinforce racial bias even while claiming to be objective. If you're curious, here's a great intro video: https://youtu.be/Cxbr_6gJZfI.
Honestly, this week's readings made me reflect on how I've sometimes jumped on ed-tech tools without thinking about the bigger picture. Moving forward, I want to be more intentional, not just about what tools I use, but about why I'm using them and who might be impacted in ways I hadn't considered.
References:
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.
Stanford Encyclopedia of Philosophy. (2020, August 20). Information technology and moral values. https://plato.stanford.edu/entries/it-moral-values/
Watters, A. (2020, June 21). The ed-tech imaginary. Hack Education. http://hackeducation.com/2020/06/21/imaginary