Serendip is an independent site partnering with faculty at multiple colleges and universities around the world.

ssv

Virus on a Network vs. Langton's Ant

I looked at the Virus on a Network model.  The model cycles from an infection of a few computers on the network, through a near-total outbreak, to a state where most computers are virus-resistant; some susceptible computers remain at the end, but the virus itself is gone (at least within the scope of my runs).  I assume the virus was dealt with on the resistant computers (virus-scanned and quarantined).  The model runs as follows: every infected computer tries to infect its neighbors, and each susceptible computer has a chance of receiving the virus that is controlled by the user.  The model description (provided by NetLogo) notes that infected nodes don't always notice when they are infected; the analogy it gives is that the antivirus software hasn't made its (perhaps weekly) check yet.  If the virus is found, the computer also has a "recovery chance."
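To make that update rule concrete, here is a minimal Python sketch of one tick of a model like this. The parameter names, values, and state encoding are my own assumptions for illustration, not NetLogo's actual code: infected nodes try to infect each susceptible neighbor, periodically check for the virus, and on recovery may become resistant.

```python
import random

SPREAD_CHANCE = 0.05      # chance an infected node infects each susceptible neighbor per tick
CHECK_FREQUENCY = 1       # ticks between a node's virus checks
RECOVERY_CHANCE = 0.3     # chance a check finds and removes the virus
RESISTANCE_CHANCE = 0.1   # chance a recovered node becomes resistant

def step(states, neighbors, tick, rng=random):
    """Advance the network one tick.

    states maps node -> 'S' (susceptible), 'I' (infected), or 'R' (resistant);
    neighbors maps node -> list of adjacent nodes.
    """
    new_states = dict(states)
    for node, state in states.items():
        if state != 'I':
            continue
        # every infected computer tries to infect its susceptible neighbors
        for nb in neighbors[node]:
            if states[nb] == 'S' and rng.random() < SPREAD_CHANCE:
                new_states[nb] = 'I'
        # periodically the node checks for (and may remove) the virus
        if tick % CHECK_FREQUENCY == 0 and rng.random() < RECOVERY_CHANCE:
            new_states[node] = 'R' if rng.random() < RESISTANCE_CHANCE else 'S'
    return new_states
```

Run repeatedly, this tends toward the outcome described above: the infection burns through the network, then dies out as recoveries and resistance accumulate.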


I've tried a few variants of running the model.  Even when I raise the number of initially infected computers and the spread chance, all of the computers eventually get the virus removed, and even more computers end up resistant compared to the results of other trials.  Compared with Langton's Ant, this model shares some of the same characteristics: simple interactions yield striking results.  In Langton's Ant, the ant has a handful of required instructions to obey, which yields complex behavior and that one almost "constant" line; in the virus model, the computers are infected, humans or software eventually check for viruses, the fix occurs, and the machines proceed to become resistant.  One difference between the models is that in the Ant model the user can draw or erase roadblocks for the ant to work around.  I consider this different from the virus model because I can't pause the virus model and construct a new virus; however, the draw/erase roadblock could arguably be compared to one of the virus model's adjustable variables.  The appealing feature of the Ant model, to me, is the roadblock feature, because it introduces human input into the program; it's interesting to see how much we can "throw it off," meaning delay its processes.  The virus model appeals to me as a whole as a computer science major.  Visualizing networks and the devices that interact with them fascinates me, including how, even when the virus conditions are made particularly harsh, the network still sustains itself over time.
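The ant's instruction set really is tiny.  A rough Python sketch of a single step, using my own simplified encoding of the usual rule (turn right on a white cell, turn left on a black cell, flip the cell's color, then move forward) rather than NetLogo's code, looks like:

```python
def ant_step(grid, pos, heading):
    """One move of Langton's Ant.

    grid is the set of black cells, pos an (x, y) pair, and
    heading an index into DIRS (0=N, 1=E, 2=S, 3=W).
    """
    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    if pos in grid:                 # black cell: turn left, flip to white
        heading = (heading - 1) % 4
        grid.discard(pos)
    else:                           # white cell: turn right, flip to black
        heading = (heading + 1) % 4
        grid.add(pos)
    dx, dy = DIRS[heading]
    return grid, (pos[0] + dx, pos[1] + dy), heading
```

On an all-white grid the ant traces out symmetric shapes at first, descends into apparent chaos, and only much later settles into the repeating "highway" line the model is known for.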


Given our two working ideas of how we're exploring emergence, the Ant model relates to them because even with its simple interactions it surprised us at first when it made a line after finding a pattern, yet when broken down it isn't really "complicated" at all.  Both models are simple and surprising because initially it seems as if neither would "solve itself" (i.e., the ant wouldn't have made that line pattern, and the computers on the network would still have a copy of the virus somewhere), but according to several trials of both models, claiming they won't do what they are built to do (follow their instructions) is false.  I'm not sure what conclusions to draw, because there may be a bigger picture I'm missing, but both models are fairly simple in execution yet yield "big" results.  We could ask whether running these models on a larger scale (for instance, the size of a university's computer network) would yield roughly the same results, just with a larger model and more time spent executing its directive.

