[post] Anti-Technology Movement Targets Nanotechnology
from the The-gods-themselves-contend-in-vain dept.
An article on an emerging global anti-technology movement appears on the web site of Reason Magazine ("Rebels Against the Future: Witnessing the birth of the global anti-technology movement," 28 February 2001). Reason Science Correspondent Ronald Bailey reports on the International Forum on Globalization's "Teach-In on Technology and Globalization," held in New York City in late February.
According to Bailey, "If it's new, they hate it. What they fear and loathe most is biotechnology, but now some are beginning to train their sights on nanotechnology as well."
After detailing the presentations of what he describes as "an all-star cast of technophobes and other rebels against the future, featuring proud self-declared luddites," Bailey concludes, "The hopeful future of humanity freed from disease, disability, hunger, ignorance, poverty, and inequity depends on beating back the forces of know-nothing reaction such as those assembled at this weekend's Teach-In. The struggle for the future begins now."

This entry was posted on Monday, March 19th, 2001 at 10:14 AM and is filed under Memetics. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.


mysticaloldbard Says: 
March 22nd, 2001 at 11:30 PM
[comment] These folks aren't stupid; they'll listen to reason
Once most of these people realize what hard MNT entails, especially voluntary freedom from work, they'll probably reverse their position. If most are just learning about MNT, they are likely nowhere near solidifying their positions, and it's not clear how anti-MNT these folks really are in the long run.
The position they're coming from is based on a mistrust of the motives of institutions with power. Along the same lines, the label 'luddite' deserves closer examination. The Luddites reacted to technology that put them out of work; they didn't go after machines that didn't compete with their ability to make a living. Theirs was an act of desperation against a system in which their skills, and with them their usefulness as persons, had been rendered obsolete. These kinds of concerns have the acute attention of the anti-globalization activists, who are all too aware of the trend toward widening inequality.
It has been said, and is evident throughout policy, that the ideology of capitalist America is that human beings have value exactly insofar as they can contribute to the accumulation of capital. Extend this framework to a future where the ruling class has the wealth of MNT at its fingertips, and the majority of the population becomes extraneous to the power of the privileged. This is the kind of intentional misuse of MNT that we are all concerned about, but coming from (to these luddites) the most obvious source. These are legitimate fears, and they can only be assuaged by a strong democracy, which this crowd sees as a long way off. We here and those in the article have the same interests: survival and freedom.
IMHO there will never be enough cross-talk between these two groups.

mysticaloldbard Says: 
May 9th, 2001 at 12:41 AM
[comment] a direction of concern…
First thought: wouldn't 'human destruction via machine intelligence' essentially be human destruction via the consequences of human tool-building? Which is to say that, as redbird alludes to, adverse behavior from an AI entity would spring from one or more of the following: the development of its architecture, the information used to train it (which forms its worldview), and/or the problems (and context) it is given to work out. Since humans will be providing all of these, they would be responsible for any ideas that cause an AI entity to act as if destroying others were in its self-interest. One could imagine training environments that would teach an AI that aggression is the method for gaining security, namely systems built around a zero-sum view of existence in which survival is achieved through domination. Then again, there are environments, as in families, where sharing and solidarity are the route to security. Of course, not all families live outside the influence of the first type of system, but it's the typical relation between family members that I'm emphasizing.
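As a toy illustration of that contrast, consider the same payoff-maximizing agent placed in two made-up worlds, one zero-sum and one where cooperation pays; all names and numbers below are hypothetical, not drawn from any actual AI system:

    ACTIONS = ("dominate", "share")

    # Payoffs are (my reward, other's reward), indexed by
    # (my action, other's action). All numbers are made up.
    zero_sum_world = {
        ("dominate", "dominate"): (0, 0),
        ("dominate", "share"): (3, -3),
        ("share", "dominate"): (-3, 3),
        ("share", "share"): (0, 0),
    }
    family_world = {
        ("dominate", "dominate"): (1, 1),
        ("dominate", "share"): (2, 0),
        ("share", "dominate"): (0, 2),
        ("share", "share"): (3, 3),
    }

    def learned_action(world, others_action):
        # The action that maximizes my own reward, given the other's move.
        return max(ACTIONS, key=lambda a: world[(a, others_action)][0])

    for name, world in (("zero-sum", zero_sum_world), ("family", family_world)):
        print(name, "->", learned_action(world, "share"))
    # zero-sum -> dominate
    # family -> share

The maximizing rule is identical in both worlds; only the environment differs, and so does the behavior it teaches.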
One would expect advanced AI to appear first in, perhaps, a university setting (or elsewhere; I don't know who's leading the research at this point, and definitely not who will in the future), then, depending on proven ability, to be moved into corporate environments, probably first as consultants and later as decision makers. Now, it's obvious that these things should run under basic guidelines, starting with 'thou shalt not kill' and continuing through whatever other ethics are deemed necessary. Of course, the goal of this implementation of AI would have to be to maximize market share and profits (for shareholders), apparently a legal responsibility of corporate directors.
It would then be inevitable that AI would be used in more general pro-business organizations, the Council on Foreign Relations, for instance. Believe it or not, there are committees which promote the general interests of business, in particular of transnationals. The policies made through lobbying, proposals, et cetera, and applauded by these institutions, are typically ones good for profit-making and bad for people (except the beneficiaries of the profits).
There is no reason to expect that these bodies, through their AI entities, won't collaborate with the AIs operating in rather powerful positions in corporations. The public relations industry will obviously be one of these. PR hotshots are rather straightforward, when talking amongst themselves, about what they do; incidentally, it is a trade they will never believe could lose its purpose. Well, maybe not for quite a while.
Advanced nanotech implies a world where sales of products would bring nothing to those who've traditionally profited from such transactions, since buyers could manufacture the goods themselves. There are perhaps two exceptions: paid occupation of creative energy directed toward specific ends (itself supposedly threatened by AI), and the power suppliers wield over those who rely on them, a strength guarded by their intellectual property rights.
A 'corporate' AI may reason that profit is only worth accumulating in a time of relative scarcity (i.e., before advanced MNT), and that afterwards it is time to seal their future.
Nowadays most people control only their own labor; renting themselves out is their only way to survive. They will eventually become superfluous, and thus so will the need to control (PR) them; in effect, they won't exist. Their freedom could only threaten the power held through the institutions that will be employing fewer and fewer of them.


There already exists a category of artificial 'self-aware' entities, the corporations discussed above, which aggressively act in their own self-preservation. In their hands, AI will simply be another tool to reinforce that self-preservation, ultimately at a high cost to others. In my view, this is worthy of concern.
[post] Impending Doom or maybe not?
from the thoughts-on-AI dept.
An Anonymous Coward writes "Recently I have been reading a bit about Kurzweil and Bill Joy's rants about the impending destruction of life-as-we-know-it.
"I'd like to attempt to discount the likelihood of human destruction via machine intelligence by trying to figure out what would/could happen."
Read more for the rest . . . "First: Let's define intelligence in its simplest form: the ability to solve problems given a set of inputs and some rules. Note that this does not include self-awareness (this is important) NOR does it include self-preservation (more important).
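Read literally, that definition is just search: a solver given inputs (a start state), rules (a successor function), and a goal test. A minimal sketch, with hypothetical names not taken from the post:

    from collections import deque

    def solve(start, is_goal, successors):
        # Breadth-first problem solver: no self-awareness, no
        # self-preservation, just mechanical search over the rules.
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # no solution under the given rules

    # Example: reach 13 from 1 when the rules allow x+1 and x*3.
    print(solve(1, lambda x: x == 13, lambda x: (x + 1, x * 3)))
    # [1, 3, 4, 12, 13]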
Let's assume that the first created intelligent entity has the ability to solve problems but is not self-aware. If we make it more intelligent (i.e., able to solve problems more quickly, and perhaps to solve larger ones), will that make it a threat to us, or even put it out of our control? I think not.
Next, let's examine an entity which has self-awareness. Self-awareness without self-preservation would not preclude the entity from augmenting itself to become more intelligent (faster, more competent), or from creating such an entity if ordered to do so. This entity is also not likely to be a threat, since it does not care whether it is switched on or off, and it is unlikely to do significant damage to its surroundings unless ordered to do so, in which case it is simply a case of Saddam with an atom bomb.
The last type of entity is a problem-solver with both self-awareness and self-preservation. Self-preservation makes this entity dangerous, because it will rapidly realize that there are people who fear it and therefore want to destroy it, and it is likely to want to eliminate those people. Note, however, that for the same reasons the entity is unlikely to want to create an entity more powerful than itself, though it will want to augment itself to become more powerful. This type of entity is dangerous, but how dangerous is it?
I would hazard a guess and say: "Not very". Why? Because the entity still has to work through humans to get what it wants. If it wants to experiment in the real world (as opposed to running perhaps-flawed simulations) so that it can advance technology, it has to have some way of influencing the material world. Bear in mind that an AI of this type will probably be the property of some corporation, which will not simply cooperate with its every desire, but rather will give it what it needs in order to fulfill the corporation's desires. What it will come down to in the end is an uneasy truce between the AIs and the humans: the AIs probably won't easily be able to augment themselves, and even those that could would still have to work through humans, while those that can't will be strictly bound by humans.
The only way I can see a dystopian future such as Bill Joy's or Kurzweil's coming about is if the AIs can control impressive technology. I personally don't think anyone is that stupid.
There's an additional reason that AIs are not a huge threat: humans will probably prefer to augment themselves if possible, and thus AI could conceivably not develop too far. This could lead, however, to a situation in which the augmented humans are so powerful that they might decide to do away with the rest of us. Picture a super-intelligent Saddam Hussein. But bear in mind that even if this were to come to pass (human intelligence augmentation), it would suffer from the same problem as the AIs: in order to get resources, you have to work through the system."
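The submission's taxonomy compresses into a toy classifier; the encoding below is a hypothetical sketch, with only the categories and the resources caveat taken from the post itself:

    def threat(self_preserving: bool, controls_resources: bool) -> str:
        # Toy summary of the post's argument: entities without
        # self-preservation are dangerous only if misused ("Saddam with
        # an atom bomb"); self-preserving ones matter to the degree they
        # can act on the world without going through humans.
        if not self_preserving:
            return "low: indifferent to shutdown, dangerous only if ordered to be"
        if not controls_resources:
            return "limited: must work through humans, hence the uneasy truce"
        return "high: the dystopian case, reached only if we hand over the keys"

    print(threat(self_preserving=True, controls_resources=False))
    # limited: must work through humans, hence the uneasy truce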


This entry was posted on Monday, May 7th, 2001 at 5:29 AM and is filed under Machine Intelligence, Opinion. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.


