Friday 6 April 2012

WHAT ARE SOME OF THE LIMITATIONS OR DANGERS YOU SEE IN THE USE OF ARTIFICIAL INTELLIGENCE TECHNOLOGIES SUCH AS EXPERT SYSTEMS, VIRTUAL REALITY AND INTELLIGENT AGENTS?



EXPERT SYSTEMS:-
Expert systems have several disadvantages, such as:
1- No common sense is used in making decisions
2- Lack of the creative responses that human experts are capable of
3- Not capable of explaining the logic and reasoning behind a decision
4- It is not easy to automate complex processes
5- Lack of flexibility and of the ability to adapt to changing environments
6- Not able to recognize when there is no answer (illustrated in the sketch below)
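
As a rough illustration of points 1 and 6, here is a toy rule-based system in Python. It is only a sketch, and the rules and symptom names are invented for illustration: the system applies its rules completely literally, with no common sense, and when no rule matches it simply returns nothing instead of recognizing and reporting that it has no answer.

# Toy rule-based expert system (hypothetical rules, for illustration only).
RULES = [
    # (required facts, conclusion): a tiny knowledge base
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
]

def diagnose(facts):
    for conditions, conclusion in RULES:
        if conditions <= facts:    # every condition is among the given facts
            return conclusion
    return None                    # no rule fired: the system is simply silent

print(diagnose({"fever", "cough"}))   # -> possible flu
print(diagnose({"headache"}))         # -> None; it cannot say why it failed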

VIRTUAL REALITY:-
In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging and data communication technologies become more powerful and cost-effective over time.
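
To get a feel for why bandwidth in particular is a limitation, here is a back-of-the-envelope calculation in Python. The resolution, frame rate and colour depth are assumed, illustrative numbers, not figures for any particular headset:

# Rough bandwidth estimate for uncompressed stereo video (assumed numbers).
width, height = 1920, 1080    # pixels per eye (assumed display resolution)
eyes = 2                      # stereo rendering, one image per eye
fps = 90                      # a frame rate often cited for comfortable VR
bytes_per_pixel = 3           # uncompressed 24-bit colour

bytes_per_second = width * height * eyes * fps * bytes_per_pixel
print(bytes_per_second / 1e9, "GB/s")   # about 1.1 GB/s of raw pixel data

That is roughly a gigabyte of raw pixel data every second, before any tracking data or audio, which is why compression, faster processors and cheaper bandwidth all matter.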

INTELLIGENT AGENTS:-
While there is much discussion of "intelligent web agents", the commercial examples so far are better described as "servers". The adjectives "intelligent" and "autonomous" are problematic academic terms for software that is not based on the web. The agents being developed for engineering applications, typed-message agents, are indeed largely incompatible with the web, and are very different from engineering web servers. This incompatibility follows from a criterion of agenthood: that agents be able to initiate messages to one another. The web is client/server-oriented, while agents require peer-to-peer communications. Another major problem is that agents require structure reflecting task-level semantics, whereas the web is oriented around formatting structure that represents only the transport and display of information.
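The peer-to-peer, typed-message style described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual protocol of any framework mentioned here; the agent names and message types are invented. The key point is that either agent can initiate a message to the other, which a pure client/server exchange does not allow:

# Minimal peer-to-peer typed-message sketch (hypothetical names and types).
class Agent:
    def __init__(self, name, directory):
        self.name = name
        self.directory = directory
        directory[name] = self            # register so peers can reach us

    def send(self, to, msg_type, body):
        # any agent may initiate a message to any other agent (peer to peer)
        self.directory[to].receive(self.name, msg_type, body)

    def receive(self, sender, msg_type, body):
        print(f"{self.name} got {msg_type} from {sender}: {body}")
        # a typed message carries task-level semantics, so the agent can
        # dispatch on the type instead of parsing display-oriented markup
        if msg_type == "request-change":
            self.send(sender, "accept-change", body)  # reply by initiating

directory = {}
Agent("design-agent", directory)
Agent("analysis-agent", directory)
directory["design-agent"].send("analysis-agent", "request-change",
                               "widen flange to 12 mm")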
These two fundamental sources of incompatibility must be addressed before each can leverage the other. The JAT work seems to offer hope for overcoming the protocol problem. The lack of semantic structure in HTML documents is an even larger problem, but it may be addressed in the future by advanced authoring tools and by programs that can read and extract semantics from web documents. Relatively simple examples of both approaches exist today. However, much work remains before useful engineering agents emerge on the web.
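As a toy example of the second approach, a program that reads a web document and extracts semantics from it, the Python sketch below pulls field/value pairs out of HTML that was marked up only for display. The page content and the "Label: value" convention are invented for illustration:

# Extracting task-level semantics from display-oriented HTML (toy example).
import re

html = """
<html><body>
  <p>Part: <b>flange-12</b></p>
  <p>Material: <b>steel</b></p>
</body></html>
"""

# HTML only says "make this bold"; the program has to guess that the
# "Label: <b>value</b>" pattern encodes a field, which is exactly the
# semantic structure the markup itself does not provide.
fields = dict(re.findall(r"<p>(\w+):\s*<b>([\w-]+)</b></p>", html))
print(fields)   # {'Part': 'flange-12', 'Material': 'steel'}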
Finally, we note that many of the agents being developed for engineering applications are of the "weak" kind, in that there is no commitment to powerful reasoning by the individual agents. In fact, "dumb" legacy systems can be accommodated by the typed-message approach, which commits only to an application-dependent protocol. The protocol may be derived from a "strong" theory of agents, as advocated by Haddadi [Haddadi 96], or from a theory of design, as with the Next-Link agent protocol. In both cases, the result is that typed-message agent-based systems can add value to engineering systems and even integrate heterogeneous services, even though no individual agent might be characterized as "intelligent". In short, weak agents can be powerful as well as being well-defined.
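
To make the point about "dumb" legacy systems concrete, the sketch below (all names hypothetical) wraps an unmodifiable legacy routine in an agent that commits only to a typed-message protocol and does no reasoning of its own:

# Wrapping a legacy routine behind a typed-message protocol (hypothetical).
def legacy_stress_check(load_kn):
    # stand-in for an old batch program we cannot modify
    return "OK" if load_kn < 100 else "FAIL"

class LegacyWrapperAgent:
    # commits only to the protocol, not to any internal "intelligence"
    def receive(self, sender, msg_type, body):
        if msg_type == "check-stress":
            return ("stress-result", legacy_stress_check(body))
        return ("not-understood", msg_type)

agent = LegacyWrapperAgent()
print(agent.receive("design-agent", "check-stress", 80))  # ('stress-result', 'OK')

The wrapper adds no intelligence at all, yet it lets an existing system participate in an agent-based exchange, which is the sense in which weak agents can still be useful.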