How Far Should Technology Go?

by Agustin Panico


Technology is the sum of human ingenuity. People create technology when there is a problem to be solved, and with every problem solved, humanity progresses a little further. Some technology, though, is built simply for its own sake. What happens when technology advances too far?


The singularity is the concept of an era in which technology develops itself at an exponential rate, changing how we live forever: manufactured machines manufacturing more machines, AI improving itself at an unforeseen pace, mass production at enormous scale. For companies this would be efficient. Time can be scarcer than money, so this is the natural progression. Robots can also be cheaper than people in the long term, so it makes sense that companies would opt for them as well.


There is a fear in the United States of people “taking” other people's jobs. Since robots would be cheaper than people, many are afraid of machines taking their jobs as well. A large portion of the population fears immigrants taking their jobs, but they are not looking toward the future. Factories were the easiest to fill with robots because an assembly line is the simplest setting: the conveyor moves a part to a machine, and the machine performs one specific action. As AI develops, robots can take on more and more complex jobs. I expect robots will first take over productive jobs that involve little movement or communication, since those are simpler and do not require running on a battery. As robots evolve, they will take up much of the workforce.


Military technology could be a major threat to humanity as a whole. Stephen Hawking was well aware of the threat, going as far as saying in 2014 that "The development of full artificial intelligence could spell the end of the human race." As of 2021, there have been reported sightings of police-deployed robot dogs from Boston Dynamics in New York; the deployment is said to be in a test phase and used only for reconnaissance. The military, meanwhile, is working on targeting AI and algorithms. The US has always been a glutton for military power and would no doubt encourage more technology of this type. It may help decrease casualties on a battlefield, but it has dangerous implications. As AI grows more advanced, there will be more questionable programming, since developments will be made beyond targeting programs. In these situations it is important to know the priorities in the machines' programming.


The science fiction writer Isaac Asimov wrote three laws that most robots in his stories would come to follow. The first (and most important) was "A robot may not injure a human being or, through inaction, allow a human being to come to harm." To prevent AI from attacking humans through a bug, malice, or wrong priorities, machines would need to be programmed to follow this law, and that programming would require a great deal of thought. Consider a rule of the form "If a human may come to harm, do this." A robot cannot stand by when a human may be hurt, but what should the robot do? The AI would need to come up with a solution itself, and improvising when someone may be in danger is a remarkably complex problem. The robot would need to be able to identify any danger. What is considered harmful? Note that the law does not specify physical harm. Are words harmful to a human being? Is a human always worth saving from harm when a lesson could be learned? I fear that AI may become a catastrophic risk before it is complex enough to comprehend this law.
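To make the difficulty concrete, here is a minimal sketch of what putting the First Law at the top of a robot's priorities might look like in code. Everything here is invented for illustration (the `Action` class, the `choose_action` function, the numeric harm scores); real safety engineering is vastly harder, not least because estimating "harm risk" is exactly the unsolved problem described above.

```python
# Hypothetical sketch: Asimov's First Law as the top priority in a
# robot's action-selection loop. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_value: float       # how useful the action is for the robot's job
    human_harm_risk: float  # estimated probability a human comes to harm

def choose_action(actions, harm_threshold=0.0):
    """Pick the most useful action whose estimated harm risk is acceptable.

    First Law first: any action risking harm above the threshold is
    discarded outright, no matter how valuable it is to the task.
    If no option is safe, the robot must not simply stand by either,
    so it falls back to the least harmful action, mirroring the
    "through inaction" clause in the most naive way possible.
    """
    safe = [a for a in actions if a.human_harm_risk <= harm_threshold]
    if safe:
        return max(safe, key=lambda a: a.task_value)
    # No safe option: minimize harm rather than do nothing.
    return min(actions, key=lambda a: a.human_harm_risk)

options = [
    Action("weld faster", task_value=0.9, human_harm_risk=0.2),
    Action("pause and warn", task_value=0.1, human_harm_risk=0.0),
]
print(choose_action(options).name)  # prefers the safe, less productive action
```

Even this toy version exposes the essay's point: the hard part is not the comparison logic but filling in `human_harm_risk` at all, since the machine would first have to recognize what counts as harm.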


These days, during the pandemic, people at home have to rely on computers to get work done. We communicate through computers and submit our work through them. Some prefer this way of working, and others hate it. I probably would not mind it as much if I were not spending a college year doing it. Over time, situations like these will push the advancement of technology. I had never heard of Zoom before the pandemic and probably never would have. If anything, this predicament is another example of how resourceful we can be. Some people could work like this for the rest of their lives and would never have considered it beforehand. I am sure that in the coming decades we will be even better prepared, because inventors and programmers lived through this scenario, and hopefully that will benefit mankind.
