Artificial Intelligence, Communication and Big Data for Information War
Brig (Dr) Ashok Pathak
The Issue

The combination of the two words information and war into the phrase Information War (IW) does not convey much if taken in isolation. The concepts and theories built around it bring out the idea, components and strategies for conducting IW activities by utilizing modern Information Communication Technologies (ICT). A closer look at these theories reveals that it is not the ICTs themselves but the synergistic use of these technologies, in a given context, for specific goals set by people with the appropriate cognitive abilities, that makes these theories relevant.

The complete value chain of IW thus involves: setting very clear and unambiguous goals; identifying and collecting relevant, accurate data; converting the data into information which can be understood in the given context; transforming the information into intelligence for creating alternative courses of action; selecting the best course; and implementing it with appropriate control so that the implementation stays aligned to the goals. Thereafter, feedback based on analysis of the outcomes drives course correction, whether repeat implementation or modification of the course of action.
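The loop above, from goal to feedback, can be sketched in code. This is a purely illustrative toy, all names and figures are hypothetical, showing how outcome feedback forces re-examination of the chosen course of action:

```python
# Illustrative sketch of the IW value chain as a feedback loop.
# All names and figures below are hypothetical.

def run_value_chain(goal, raw_data, steps=3):
    """Goal -> data -> information -> intelligence -> action -> feedback."""
    course = None
    outcomes = []
    for _ in range(steps):
        # Collect only data relevant to the goal.
        data = [d for d in raw_data if d["relevant_to"] == goal]
        # Convert data into information understood in the given context.
        information = {d["topic"]: d["value"] for d in data}
        # Intelligence: generate alternative courses and score them.
        courses = [{"name": k, "score": v} for k, v in information.items()]
        course = max(courses, key=lambda c: c["score"])   # select the best course
        outcomes.append(course["score"])                  # implement (simulated)
        # Feedback: down-weight the chosen course so alternatives get re-examined.
        for d in raw_data:
            if d["topic"] == course["name"]:
                d["value"] *= 0.9
    return course, outcomes

data = [
    {"relevant_to": "influence", "topic": "media", "value": 0.8},
    {"relevant_to": "influence", "topic": "diplomacy", "value": 0.7},
    {"relevant_to": "other", "topic": "logistics", "value": 0.9},
]
best, history = run_value_chain("influence", data)
```

After repeated iterations the feedback shifts the selection away from the initially best-scoring course, which is the point of the loop: outcomes, not initial assumptions, drive the next decision.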

The convergence of three major technologies, Artificial Intelligence (AI), Communication and Big Data, helps in activating the above value chain and sustaining it optimally, aligned to the goals irrespective of the changing environment. While using these technologies it is pertinent to understand that the efficacy of the operation still depends on the cognitive abilities of the team assigned to achieve the given goal.

We will discuss the following in this article:-

  • A brief and very elementary description of AI, Communication and Big Data technologies.
  • Their possible use in IW.
  • The resultant payoffs on using these technologies.
Brief Academic Description of the Technologies

The emerging technologies in AI, Communication and Big Data are widely discussed and rarely understood. AI and Big Data are considered to be answers to all human and machine problems. There is a widespread belief that AI can replace humans and that Big Data can automatically collect and analyze any amount of data from anywhere and forecast any event. On the other hand, communication technologies are considered to be all hardware. Hence there is a need to realize that these technologies, like any other asset, have strengths and weaknesses. Also, technologies have never been able to replace humans; it is only jobs or activities that get replaced. From jobs which required excessive hard work and drudgery, human beings moved up to more intelligent and inspiring jobs. It will therefore be prudent to discuss these technologies briefly in academic terms.

AI

In generic terms, AI is defined, first, as systems that think like humans, or machines with minds; and second, as systems that act like humans, that is, systems that perform functions which require intelligence when performed by people. Thus, AI has two key aspects: thinking rationally and acting rationally.

The above definition may create exaggerated expectations from AI. In real-world applications these are computer models or programs written in such a way that with every use the program learns on its own to perform better and faster in the next iteration. It works on a specified problem under well-defined constraints or within a defined ecosystem. It is the job of the programmer to define the goals or objectives, and the human masters define the ecosystem that the AI program will encounter. If we are writing an AI model for a self-driving car, then safety distances in relation to speed, types of traffic signals, preference to pedestrians and rules for overtaking need to be well defined in a language that the car's systems understand. As the car goes through its runs it will improve, with time, its skills in negotiating traffic, observing speed limits, and recognizing and reacting to traffic signals. But it will not learn the repair or design skills of the automotive industry.
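The "learns with every use" behaviour can be illustrated with a toy sketch, not an actual driving system: a hypothetical model that refines its estimate of the safe following gap per unit of speed after each simulated run:

```python
# Toy sketch of a program that improves with each iteration: the learner
# refines a hypothetical "safe gap per km/h" factor from simulated runs.
import random

random.seed(42)

def true_gap(speed_kmph):
    # Ground truth the car experiences (unknown to the learner): 0.5 m per km/h.
    return 0.5 * speed_kmph

def learn_gap_factor(runs, lr=0.05):
    factor = 1.0                      # initial guess: 1.0 m per km/h (too cautious)
    errors = []
    for _ in range(runs):
        speed = random.uniform(30, 90)
        predicted = factor * speed
        error = predicted - true_gap(speed)
        factor -= lr * error / speed  # correction step proportional to the error
        errors.append(abs(error))
    return factor, errors

factor, errors = learn_gap_factor(200)
```

The estimate converges toward the true value and the per-run error shrinks, which is all that "learning" means here: better and faster in the next iteration, but only within the narrowly defined task.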

Thus AI is an inter-disciplinary field of cognitive science that brings together computer models and experimental techniques from psychology to construct precise and testable theories of the working of the human mind. The foundation of AI is based on a variety of knowledge domains. These include:-

  • Philosophy. The relevant domains in philosophy for AI application are dualism, materialism, empiricism, induction, logical positivism, and confirmation theory.
  • Mathematics. Algorithms.
  • Probability and Decision Theory. Probability theory deals with the relative frequency with which an event occurs when the number of observations is very large. This branch of statistics helps us predict an occurrence by observing a series of similar occurrences in the past; from the observed trend, the probability of a similar occurrence can be predicted. Software-assisted methods can be used to work out the probabilities, and when fed to AI systems the theory can be applied to practical, real-world issues. Decision theory addresses decision making in three different situations: first, when the payoffs from the various decision options are known with certainty; second, when the probabilities of the payoffs are known and the decision maker chooses an option based on the payoffs and related penalties; and third, decision making in environments of complete uncertainty, where decisions are taken on optimistic, pessimistic or balanced inclinations.
  • Game Theory. A decision-making tool for dealing with an adversary in a competitive situation, where there are two or more opposing parties with conflicting interests and where each decision depends on the one taken by the opponent. AI in game theory can help us simulate war games or the actual application of forces and weapon mixes.
  • Operations Research. Optimization of objective functions under a given set of constraints. Modern operations research techniques, initially developed during the Second World War, use various mathematical tools for finding the most advantageous or optimal solution to a problem under given constraints: linear programming for a single objective function, goal programming for a number of competing goals, queuing theory for servicing and sequencing issues, transportation models for optimizing movements, and so on. All these techniques can now be addressed through software tools. If the environment and desired actions can be defined along with sensors, AI techniques can also address these issues, with the added advantage that the process is self-learning in the given domain and improves with usage. AI-assisted methods can also handle much more complex issues through these techniques with increasing accuracy.
  • Neuroscience. Deals with the way the human brain functions. As per the latest theories on the subject, the brain is considered to be the seat of consciousness. This knowledge existed with Indian philosophers since ancient times.
  • Psychology. Deals with how humans, and also animals, think and act. Primarily deals with behaviorism, and also includes cognitive psychology, which gives us a view of the brain as an information-processing device.
  • Cybernetics. This deals with computer modeling and control theory.
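The decision-theory situations listed above can be made concrete with a small example. Using a hypothetical payoff matrix (rows are courses of action, columns are states of nature), the classical criteria, maximax for the optimistic, maximin for the pessimistic, the Hurwicz criterion for the balanced decision maker, and expected value when probabilities are known, can be sketched as follows:

```python
# Hypothetical payoff matrix: rows = courses of action, columns = states of nature.
payoffs = {
    "course_A": [40, 10, -5],
    "course_B": [25, 20, 15],
    "course_C": [60, 5, -25],
}

def maximax(p):              # optimistic: best of the best payoffs
    return max(p, key=lambda a: max(p[a]))

def maximin(p):              # pessimistic: best of the worst payoffs
    return max(p, key=lambda a: min(p[a]))

def hurwicz(p, alpha=0.5):   # balanced: weight best and worst payoffs by alpha
    return max(p, key=lambda a: alpha * max(p[a]) + (1 - alpha) * min(p[a]))

def expected_value(p, probs):  # decision under risk: state probabilities known
    return max(p, key=lambda a: sum(x * q for x, q in zip(p[a], probs)))
```

With these hypothetical figures, the optimist picks course_C while both the pessimist and the expected-value decision maker (for state probabilities 0.2, 0.5, 0.3) pick course_B, illustrating how the same data supports different rational choices under different inclinations.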
Some of the historical uses of AI tools include the following:-

    • In game playing, the world chess champion Garry Kasparov played against an AI-enabled computer in 1997 and lost.
    • Computer-vision-based autonomous navigation was used for the auto-driving of a car in the US; during the 2,850-mile journey the car behaved rationally 98 percent of the time.
    • During the 1991 Gulf War the Dynamic Analysis and Re-planning Tool (DART) was used to control 50,000 vehicles and people at a time, covering starting points, routes, destinations and conflict resolution. This single application paid for the US Defense Advanced Research Projects Agency's (DARPA) 30 years' investment in AI research.
    • Robotics, generally used in surgery.
Concept of an Agent in AI

    An agent is a computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its delegated objective (Wooldridge). An agent takes sensory input from the environment and produces as output actions that affect it. The interaction is usually an ongoing, non-terminating one. Thus, the system consists of perception, decision and action.
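The perception-decision-action cycle can be sketched with a toy thermostat agent; the environment, agent and action names here are illustrative, not drawn from any particular framework:

```python
# Toy agent illustrating the sense -> decide -> act cycle.
# The environment and agent below are hypothetical illustrations.

class Environment:
    def __init__(self, temperature):
        self.temperature = temperature

    def percept(self):
        return self.temperature          # sensory input available to the agent

    def apply(self, action):             # the agent's action affects the environment
        self.temperature += {"heat": 2, "cool": -2, "idle": 0}[action]

class ThermostatAgent:
    def __init__(self, target):
        self.target = target             # the delegated objective

    def decide(self, percept):
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"

env = Environment(temperature=14)
agent = ThermostatAgent(target=21)
for _ in range(10):                      # an ongoing (here truncated) interaction
    env.apply(agent.decide(env.percept()))
```

After a few cycles the room settles inside the one-degree dead band around the target, after which the agent idles: perception, decision and action in their simplest closed loop.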

    There are intelligent, collaborating and cooperating agents, each with different characteristics; a combination of the three types makes smart agents that continuously learn from the environment in each transaction. A learning agent can operate in an unknown environment and becomes more competent as it learns from each transaction. Agents also have different levels of autonomy. No agent has complete autonomy, since that is available only to humans. Good AI systems have adjustable autonomy, where control of decision making is transferred from the agent to a person whenever certain conditions are met.

    Software agents are used very extensively in e-commerce and other business activities because, given the easy availability of data and information, decision makers tend to get bogged down by what is termed information overload. The massive flow of online data provides a fast-changing impression of the business environment, and a decision maker needs to focus on the essential and leave the chaff. There is thus a need to devise tools which can execute mundane, repetitive and moderately intelligent tasks without involving the decision maker. The basic difference between a software agent and a software command is that the agent is intended to do 'what is implied', while the command does 'what it is asked to do'.

    There are two primary types of software agents:-

    • Static Software Agent. Resides on the owner's machine and performs tasks such as categorizing mail, replying to routine mail and informing the owner where critical decisions are required.

    • Mobile Software Agent. Moves in the designated environment and has the ability to execute commands without reference to the owner.

    Some important functions of software agents are as under:-

    • Managing Information Overload. Filter massive data flow to retrieve relevant and manageable information.
    • Decision Support. Help knowledge workers in manipulating decision-support models.
    • Repetitive Office Activity. Repetitive transactional tasks of clerical nature.
    • Mundane Personal Activity. Booking tickets, planning tours, conferences.
    • Search and Retrieval. Examine distributed data and information in e-commerce activities.
    • Domain Expert. Subject specific, legal, financial, stock brokers, priests.
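A static software agent of the kind listed above, for example one that categorizes mail, reduces to a set of user-scripted rules applied to each message. The rules and folder names below are hypothetical:

```python
# Sketch of a static software agent that categorizes mail by user-scripted
# rules. All rules, addresses and folder names are hypothetical.

RULES = [
    (lambda m: "invoice" in m["subject"].lower(), "finance"),
    (lambda m: m["sender"].endswith("@hq.example.org"), "priority"),
    (lambda m: "unsubscribe" in m["body"].lower(), "bulk"),
]

def categorize(message, rules=RULES, default="inbox"):
    """Return the folder for the first matching rule."""
    for predicate, folder in rules:
        if predicate(message):
            return folder
    return default

mail = {"sender": "cdr@hq.example.org", "subject": "Movement order", "body": "..."}
```

Here the owner scripts the rules once, and the agent then handles the repetitive filtering, escalating to the inbox (and hence the owner) only when no rule matches.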

    These agents have following properties:-

    • Programming. Powerful language/means to define the rules unambiguously.

    • Safety. Assurance that the agent can cause no damage.
    • Resource Usage. Financial and time, memory space.
    • Navigation. Capability to find the required resources.
    • Privacy. Internal state and program should not be visible to others.
    • Communication. With the users and one another.

    General Characteristics of Software Agents are:-

    • Independent Agency. Ability to perform user defined tasks autonomous of the user.
    • Agent Learning. Mimic users’ steps while performing the task. Learn from own activities.
    • Agent Cooperation. Engage in complex two-way tasks involving the user and other agents.
    • Agent Reasoning Capability. In decision making capacity.
    • Rule Based. Based on user scripted controls.
    • Knowledge Based. Capability to deduce based on the knowledge base inserted by the user.
    • Learning Approach. Capability to learn from statistical history.
    • Agent Interface. Human like approach (pictorials, voice, emotions). Providing different levels of intelligence.
Concept and Types of Environment

    The environment for an agent is defined in terms of the tasks specified, for which the agent has to find a solution. The environment can be any of the following:-

    • Fully observable vs partially observable. An environment is fully observable if the agent's sensors give it access to the complete state of the environment at each point in time, i.e. the sensors detect all aspects relevant to the task; otherwise it is partially observable.
    • Deterministic vs stochastic. If the next state of the environment is completely determined by the current state and the agent's action, the environment is deterministic; otherwise it is stochastic. A partially observable environment may appear stochastic to the agent.
    • Episodic vs sequential. In an episodic environment the agent's experience is divided into atomic episodes, each consisting of the agent perceiving and then performing a task; the next episode does not depend on the task performed in the earlier one. In a sequential environment the current action may affect all later tasks in the sequence.
    • Static vs dynamic. If the environment can change while the agent is deliberating then we say that the environment is dynamic.
    • Discrete vs continuous. This distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. A discrete environment has a finite number of distinct states and actions with respect to time (e.g. a game of chess), while a continuous one does not (e.g. taxi driving).
Communication in AI

    An important theme in the study of communication is 'speech act theory', better called Communicative Act Theory, since it has little specific connection with spoken communication. The main insight behind Communicative Act Theory is that communication is a form of action. Specifically, we can think of communicative acts as those where "saying makes it so." For example, when a judge declares a couple married, the judge is not merely reporting on some privately or publicly known fact; the judge is bringing the fact into existence. The same may be said of a soccer referee who ejects a player from the game: the referee is not merely stating that the player is not allowed on the field for the duration of the game, but is causing the player's permission to enter the field during the current game to be withdrawn. The judge and the referee rely upon lower-level means to carry out these communicative acts. This kind of stylized construction has an important ramification for us as students of multi-agent systems. It emphasizes that although what is being communicated may or may not be within the control of the communicator, the fact that the agent chooses to inform, request, or promise another agent is entirely within its control. The construction thus coheres with our multi-agent systems thinking about autonomy and reflects the essentially autonomous nature of communication.

    For two agents to communicate about some domain it is necessary to devise the terminology that these agents use to describe the domain. The set of terms written to provide a common basis of understanding about the domain is an 'ontology'. An ontology is a formal definition of a body of knowledge which is used by agents in AI.

    In AI, communication is an action. An agent can communicate by any means, e-mailing, sky-writing or sign language, so that, working in a multi-agent environment, two or more agents cooperate and coordinate to arrive at a joint plan. The communication can take one or more of the following forms:-

  • Query. Asking about a particular aspect of the environment.
  • Inform. Informing each other about the world.
  • Request. Ask other agents to perform the task.
  • Acknowledge. Send back an OK once a request has been received.
  • Promise. Commit to the plan.
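These communicative acts can be sketched as structured messages exchanged between two agents. The message format below is a simplified, hypothetical one, loosely inspired by FIPA-ACL-style performatives:

```python
# Minimal sketch of agents exchanging the communicative acts listed above.
# The message format and agent behaviour are illustrative inventions.

def make_message(performative, sender, receiver, content):
    return {"performative": performative, "sender": sender,
            "receiver": receiver, "content": content}

class Agent:
    def __init__(self, name, knowledge=None):
        self.name = name
        self.knowledge = knowledge or {}
        self.commitments = []

    def handle(self, msg):
        p = msg["performative"]
        if p == "query":                 # asked about the environment
            answer = self.knowledge.get(msg["content"], "unknown")
            return make_message("inform", self.name, msg["sender"], answer)
        if p == "inform":                # told a fact about the world
            key, value = msg["content"]
            self.knowledge[key] = value
            return None
        if p == "request":               # asked to perform a task
            self.commitments.append(msg["content"])   # promise: commit to the plan
            return make_message("acknowledge", self.name, msg["sender"], "OK")

a = Agent("alpha", {"weather": "clear"})
b = Agent("bravo")
reply = a.handle(make_message("query", "bravo", "alpha", "weather"))
ack = b.handle(make_message("request", "alpha", "bravo", "recce route 7"))
```

A query is answered with an inform, a request is acknowledged and recorded as a commitment (a promise), and each agent remains free to decide how to respond, reflecting the autonomy discussed above.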
Big Data

Data is the start point of all studies in statistics, research and decision science. In statistics, data are defined as a collection of any number of observations: a collection of data is called a 'data-set' and a single observation a 'data-point'. In traditional research or decision science the first step is to identify the research problem or decision model. Thereafter, the type of data required to solve the problem or prove/disprove the model is decided. This is followed by collecting the data in a systematic way, representing and analyzing it, and coming out with an interpretation in the context of the research problem or decision model. In this entire process the data is highly structured and contextual, which means that data collection is under the control of the researcher or decision maker; the data is of finite size and is available at a manageable speed for the purpose of analysis.

Big Data differs from the above approach in almost all aspects. First, the collection of data is beyond the control of the researcher; it is therefore unstructured. The sources from which the data comes are many and varied in form: mails, purchase records, social media posts, blogs, texts, audio, video, and so on. The speed at which the data arrives and must be analyzed is exceptionally high, and this velocity is increasing by the day. Thus, the volume, velocity and variety of data to be handled in Big Data make traditional methods of analysis irrelevant; we need new technology to address this new phenomenon. However, the most revolutionary change Big Data brings in its wake is the move from a model- or research-problem-driven approach to a data-driven approach. It is this huge volume of data, from a variety of sources and arriving at unimaginable velocity, that decides the trends, patterns, new understanding and even the models.

Thus Big Data is an exceptionally large set of complex data, both structured and unstructured, which is beyond the analytic capability of traditional statistical techniques and algorithms. It needs new technology for maximizing computational power and algorithmic accuracy to gather, analyze, link and compare this huge data-set, and new methods of analysis that identify patterns within it. Finally, keeping in view the quantum of data from almost all imaginable sources, Big Data has the capability to generate insights which were not possible earlier.
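One standard answer to data that is too large and too fast to store for batch analysis is a one-pass (streaming) algorithm that keeps only summary statistics. Welford's online algorithm for mean and variance is a classic example; each data point updates the summary and is then discarded:

```python
# One-pass (streaming) mean and variance via Welford's online algorithm:
# each data point updates the summary and is discarded, so an arbitrarily
# fast or large stream needs only constant memory.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0        # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):      # sample variance
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for x in [4.0, 7.0, 13.0, 16.0]:
    stats.update(x)
```

For the four-point stream above the result matches the batch computation (mean 10.0, sample variance 30.0), but unlike the batch method the stream itself is never stored, which is precisely what the velocity and volume of Big Data demand.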

There are four distinct characteristics of Big Data:-

  • Volume. Existing data already runs into petabytes, which is problematic in itself; it is predicted to grow to zettabytes (ZB) in the next few years, owing to the increased use of mobile devices, social networks and the Internet of Things.
  • Velocity. Refers to both the rate at which data is captured and the rate of data flow. Increased dependence on live data poses challenges for traditional analytics, as the data is too large and continuously in motion.
  • Variety. The data collected is not of a specific category or from a single source; there are numerous raw data formats, structured and unstructured, obtained from the web, texts, sensors, e-mails, etc. This variety causes traditional analytical methods to fail in managing Big Data.
  • Veracity. Ambiguity within data is the primary focus in this dimension – typically from noise and abnormalities within the data.
How can this convergence be used in IW?

Keeping in view the value chain of activities in IW from goal to action and outcome assessment, it is evident that speed, accuracy, synergy, flexibility with the changing environment, timely and appropriate application and continuous situational awareness are the key factors for success. These capabilities are inbuilt in AI, multi-agent Communication and Big Data. The extent and efficacy of use of these three technologies are limited by the cognitive abilities of the users. Some of the broad applications are as under:-

  • Strategic Application:-
  • Use of these techniques brings a cultural change in the organization. Decision makers using these tools learn to think in a logical, rational and scientific manner. In AI, the design of 'smart learning agents' will crystallize the users' perception of the task at hand, and the sensors and actuators will create an in-depth understanding of the environment and of the interplay of the various agencies and stakeholders involved. As the agents learn with each transaction, so do the users. The organizational culture will change from rule-based functioning to goal- and objective-based functioning.
  • Dynamic force structuring, application and outcome assessment can be done by the converged technologies discussed above.
  • Opinion identification, psychological messaging and opinion assessment and management campaigns, impact assessments and perception management of the target audience can be very effectively done by these three technologies.
  • Weapon and equipment profiling, design and development or procurement processes can be made extremely effective.
  • These can be used in strategic intelligence gathering and dissemination to the required entity in fast moving conflict zones.
  • It will reduce many layers of our multilayered bureaucracy and military hierarchy.
  • Operational Application:-
  • We need to apply the principles of supply chain logistics to our defence production units as also to the Defence Research & Development Organisation (DRDO) and the users (Army, Air Force and Navy). This will need the creation of a culture of cooperation and coordination to achieve competence and the desired goals. Doing so will create a massive ecosystem where AI and Big Data can be applied to leverage economies of scale and very high productivity.
  • Enhance the efficiency of operational logistics in the three Services.
  • Help commanders in contingency planning as the combat zone situational changes with time.
Some likely Payoffs

Traditionally, the three Services surrender a substantial chunk of budgeted funds due to manual and tedious processes. The inherent weaknesses of the current processes of budget preparation and expenditure monitoring stem from the series of layers and manual checks at each layer by officers in different departments. It takes months for the three Services to prepare the budget estimates. The figures are generally inflated while asking for funds, and there are duplications and avoidable demands. At the Ministry of Defence and Ministry of Finance, arbitrary cuts are imposed. Thereafter, expenditure control through the various Competent Financial Authorities (CFAs) and Integrated Financial Advisors (IFAs) is done manually. Thus, at an optimistic estimate, it takes a minimum of six months to incur and book the expenditure. Added to this are delays in payments and subsequent cost overruns in the next iteration. As per Amaya Ghosh's book 'Resource Allocation and Management in Defense: Need for a Framework' (see references), keeping in view the stringent controls and checks by the IFA system, large amounts of funds are surrendered every year: Rs. 4,100 crores in 2001; Rs. 5,000 crores in 2002-03; and Rs. 32,740.26 crores over the period 1999-2004. This unspent amount is primarily due to tedious procedures and the compulsion to spend within a financial year.

If the processes involved in budgeting and expenditure planning/monitoring are handled by these converged technologies, we would save time and around Rs. 4,000 to 5,000 crores annually, since no funds would be surrendered and there would be no cost overruns. Add to this the inefficiencies of item-wise budgeting, which can be replaced by programme-wise or capability-focused budgeting; this would save another one to two billion rupees by avoiding unnecessary projections and padding. Trend analysis, revenue estimates (which are based on actuals) and capital budget projections from the e-procurement sites can all be addressed concurrently by AI-enabled systems, and the entire process of budget projection would take a few days rather than spreading over eight to ten months.

Our import bill on procurement of weapon systems has been the highest in the world for the five years since 2013. Our state policy for the last five decades has been to keep the import component below 30 percent of total procurement; against this benchmark, our import bill has been of the order of 70 percent. By creating a supply chain management ecosystem involving all the public and private sector entities, along with research, academic and user organizations as contributors and stakeholders, we can cut imports to less than 30 percent. We must accept that this change of ecosystem will take two to three years. Considering that our import bill in 2017 was around 3.3 billion dollars, even a 30 percent reduction, achieved through the 'Make in India' approach and supply chain concepts supported by AI and Big Data techniques in design and production, is likely to yield a saving of around 2 billion dollars annually from 2022, by when the AI and Big Data enabled supply chain production systems would be fully operationalized. Add to this the high-calibre jobs we would create in the country.

It may be interesting to note some of the outstanding results achieved by adopting technology-aided efficient processes. Motorola cut the manufacturing time of a product from 40 days to one hour using Six Sigma processes and the technologies available in the early nineties. GE saved 300 million dollars by improving its processes in 2002. Western manufacturers of weapon systems are five times more productive than their Indian counterparts because they use the modern tools discussed above.

Finally, it must be clearly understood that an AI and Big Data converged system is only as good as the humans who created it. Historically, development of AI started sometime in 1967. By 1995 it was highly developed and was being used by the Americans and the West European powers. But many problems, both military and non-military, remain unsolved. The process goes on.

References:
  1. Russell, Stuart and Norvig, Peter, Artificial Intelligence: A Modern Approach, Pearson, 9th Reprint (2003), ISBN 978-81-7758-367-0.
  2. Nilsson, Nils J, Principles of Artificial Intelligence, Narosa Publishing House, Tenth Reprint (2002), ISBN 81-85198-29-2.
  3. Wooldridge, Michael, An Introduction to MultiAgent Systems, John Wiley & Sons Ltd, 2009, ISBN 9780407519462.
  4. Laudon, Kenneth C and Traver, Carol Guercio, E-Commerce, Pearson Education, 2003, ISBN 81-297-0112-X.
  5. Ravindran, Phillips and Solberg, Operations Research: Principles and Practice.
  6. Tanenbaum, Andrew S, Computer Networks, Fourth Edition (2006), Pearson Education.
  7. Taylor-Sabyi, Kevin, "Big Data: Understanding Big Data", Aston University, January 2016, https://www.researchgate.net/publication/291229189_Big_Data_Understanding_Big_Data
  8. https://www.ntnu.no/iie/fag/big/lessons/lesson2.pdf
  9. Kothari, C R, Research Methodology: Methods and Techniques, Second Edition, New Age International Publishers, 2009, ISBN (13) 978-81-224-1522-3.
  10. Levin, Richard I and Rubin, David S, Statistics for Management, Seventh Edition (1997), Prentice Hall India, ISBN 9788120312357.
  11. Pathak, Ashok, "Optimizing India's Defense Expenditure", VIF Brief, 12 December 2018.
  12. Chase, Richard B, Shankar, Ravi and Jacobs, F Robert, Operations and Supply Chain Management, McGraw Hill, 2017, ISBN 978-93-392-0410-5.
  13. Kubiak, T M and Benbow, Donald W, The Certified Six Sigma Black Belt Handbook, Pearson, 2016, ISBN 978-81-371-2869-7.

