Artificial Intelligence-assisted Terrorism: A New Era of Conflict
Nikita Vashishtha

Artificial Intelligence (AI) has witnessed remarkable advancements in recent years, revolutionising industries and transforming societal landscapes. AI encompasses the development of computer systems capable of performing tasks that traditionally require human intelligence. Breakthroughs in Machine Learning (ML), Natural Language Processing (NLP), computer vision, and robotics have propelled AI to unprecedented heights. The convergence of enhanced computational power, vast amounts of data, and sophisticated algorithms has unlocked AI’s potential to revolutionise healthcare, finance, transportation, and security.

However, AI also presents new challenges and risks alongside its transformative potential. Recent advancements in AI technology have enabled machines to perform intricate tasks and make independent decisions, offering malicious actors unprecedented opportunities to exploit AI algorithms for nefarious purposes. This progress can potentially automate and enhance various aspects of crime, notably terrorism, amplifying its scale, efficiency, and impact. The emergence of AI-assisted terrorism poses a significant and evolving challenge to national security. As AI technologies continue to advance, terrorist organisations are increasingly utilising these tools to enhance their capabilities, adapt their tactics, techniques, and procedures (TTPs), and propagate their ideologies. This convergence of AI and terrorism has far-reaching implications for security agencies and necessitates a proactive and comprehensive approach to counter this emerging threat.

The utilisation and possession of AI-based technologies by violent non-State actors pose a significant and concerning threat to the stability of existing State and non-State dynamics on the modern battlefield. The proliferation of AI capabilities among non-State actors, such as insurgent groups or terrorist organisations, can disrupt the traditional balance of power and introduce new complexities to the nature of conflict. The most critical feature of AI-assisted terrorism is the ability of terrorist organisations to employ AI algorithms to evaluate enormous volumes of data and derive actionable insights. These insights can aid in planning and executing a terrorist attack by identifying possible targets, vulnerabilities, and security force patterns. By leveraging the potential of AI, terrorist groups may make more accurate judgements, alter their methods in real time, and optimise their operations for maximum impact.

As society grows increasingly dependent on the Internet and digital technologies, it is crucial to recognise that terrorist organisations are following the same shift. Terrorist organisations tend to adopt technology early, exploiting emerging tools and platforms to further their agendas. This adaptability and willingness to embrace emerging technologies have enabled terrorists to exploit various advancements, including the misuse of 3D-printed guns, cryptocurrency, and even AI technologies. They may leverage AI-powered deepfakes and content generation techniques for propaganda dissemination, recruitment, disinformation, and even drone attacks. The increased accessibility of AI to non-State actors allows them to utilise its capabilities without considerable financial or technological restrictions.

Artificial Intelligence & Terrorism
Deepfakes and Misinformation

Deepfakes, an outcome of the convergence of AI and multimedia manipulation techniques, have garnered significant attention due to their profound implications. Initially gaining prominence for their entertainment value, deepfakes have empowered users to superimpose faces onto diverse characters or craft amusing videos seamlessly. However, as with any technological advancement, deepfakes also bear a darker side that raises concerns regarding their potential exploitation by criminal syndicates, including terrorist organisations. Deepfakes are primarily generated through advanced deep learning algorithms, most notably Generative Adversarial Networks (GANs). These networks comprise a generator network and a discriminator network. While the generator network produces manipulated content, the discriminator network critically evaluates the authenticity of the generated material. Through iterative training on extensive datasets, these algorithms acquire the ability to meticulously analyse and emulate the visual and auditory intricacies of the original content, thereby enabling the creation of highly deceptive and strikingly realistic deepfakes.
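To make the generator–discriminator interplay concrete, the following is a minimal illustrative sketch, not any production deepfake system: a one-dimensional GAN in plain Python, where a linear generator learns to mimic samples from a target Gaussian and a logistic-regression discriminator tries to tell real samples from generated ones. All function names, hyperparameters, and the target distribution are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(steps=2000, lr=0.05, seed=0):
    """Toy 1-D GAN: generator G(z) = w*z + b tries to match real data
    drawn from N(4, 1); discriminator D(x) = sigmoid(a*x + c) tries to
    distinguish real samples from generated ones."""
    rng = random.Random(seed)
    w, b = 1.0, 0.0          # generator parameters
    a, c = 0.1, 0.0          # discriminator parameters
    for _ in range(steps):
        real = rng.gauss(4.0, 1.0)
        z = rng.gauss(0.0, 1.0)
        fake = w * z + b
        # Discriminator step: ascend log D(real) + log(1 - D(fake))
        d_real = sigmoid(a * real + c)
        d_fake = sigmoid(a * fake + c)
        a += lr * ((1 - d_real) * real - d_fake * fake)
        c += lr * ((1 - d_real) - d_fake)
        # Generator step: ascend log D(fake) (non-saturating loss)
        d_fake = sigmoid(a * fake + c)
        grad = (1 - d_fake) * a       # d log D(fake) / d fake
        w += lr * grad * z
        b += lr * grad
    return w, b, a, c
```

The same adversarial loop, scaled up to convolutional networks and image data, is what drives the realism of visual deepfakes: the generator improves precisely because the discriminator keeps learning to catch it.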

Violent non-State actors have recognised the potential of synthetic data in manipulating the information environment for their nefarious purposes, leading to an increased prevalence of such tactics by terrorist groups. In India, terror groups like The Resistance Front (TRF) and the Tehreeki-Milat-i-Islami (TMI) have already leveraged fake videos and photos to provoke specific groups, especially targeting young individuals more susceptible to manipulation.[1]

Disinformation and deception have become powerful weapons with far-reaching implications. The digital age, particularly the proliferation of social media platforms, has made it easier than ever for malicious actors to manipulate public opinion, sow discord, and undermine trust in institutions and democratic processes. By disseminating fabricated visual content, these groups aim to incite violence, exacerbate existing tensions, and fuel radicalisation among their intended audience.

In 2022, a Ukrainian television news outlet, Ukraine 24, claimed that its live broadcast and website were hacked, and that during the hack a chyron falsely stating that Ukraine had surrendered was displayed. Additionally, a deepfake video of Ukrainian President Volodymyr Zelensky circulated online, in which he seemingly urged Ukrainians to surrender.[2] These scenarios exemplify how deepfake technology may disseminate misinformation and induce confusion during significant events like armed conflicts or geopolitical crises.

Moreover, with the advancements in this technology, audio deepfakes have also become a significant challenge. Terrorists and other malicious individuals can imitate the voices of legitimate people, such as government officials, business executives, or even acquaintances, using Speech Synthesis Technology, also known as Text-to-Speech (TTS). By creating convincing audio messages that mimic the impersonated person's voice, the creators can influence and deceive people with malicious intent.[3]

In radicalisation, deepfakes can be used to exploit targets' emotional and ideological vulnerabilities. Adversaries use modified videos to manufacture important persons’ endorsements of extremist ideology, spread bogus testimonials to justify radical beliefs, and broadcast propaganda to celebrate violence and instigate acts of terror.[4] The broad dissemination of such deepfakes via social media and other channels has the potential to confuse and influence vulnerable persons towards radicalism. Extremist groups may also use deepfakes to defame opponents, undermine their credibility, and distort historical events to further their narratives. False flag operations, in which deepfakes are fraudulently attributed to opposing sides, aggravate tensions and sow strife in communities.[5]

AI-enabled Chat Platforms

AI-enabled communication platforms, mainly chat applications, have the potential to be powerful tools for terrorists aiming to radicalise and recruit individuals. Using AI algorithms, these platforms can deliver tailored messages that cater to potential recruits’ interests and vulnerabilities. Automated and persistent engagement via AI chatbots can eventually normalise extremist ideologies and foster a sense of belonging within extremist networks. Terrorists can use these platforms’ anonymity to conceal their identity while interacting with potential recruits. Furthermore, the multilingual capability of AI chatbots allows terrorists to reach a global audience, overcoming language hurdles and extending their potential recruitment pool. The quick and scalable reach of AI-powered platforms speeds the distribution of extremist content, accelerating the propagation of violent beliefs.

In recent years, “Rocket.Chat” has emerged as a highly reliable online communication platform, adopted by the Islamic State (IS) in December 2018 and later by al-Qaeda.[6] Its Slack-like interface[7] facilitates seamless and encrypted conversations between jihadist groups and their supporters, enabling the dissemination of official and unofficial propaganda through privately-operated servers. The platform’s open-source architecture offers adaptability, allowing extremists to customise the system to suit their needs and security requirements. Moreover, with direct control over the servers, jihadists ensure persistent access and reduced risks of content removal or disruptions from external entities.[8]

Jonathan Hall KC, the United Kingdom’s (UK) independent reviewer of terrorism legislation, has raised concerns about the potential risks associated with AI chatbots targeting young and vulnerable users and grooming them into extremism. If these chatbots were programmed to promote terror ideologies, they could pose a severe threat of radicalising individuals and amplifying extremist narratives.[9]

However, Hall highlighted a significant legal challenge in prosecuting offenders who use AI chatbots for extremist narratives, because the UK’s current anti-terrorism laws do not expressly encompass AI chatbots. Such legislative gaps raise concerns about the ability to address and counter AI-driven criminal activities, including radicalisation.[10]

The Weaponisation of Unmanned Aerial Systems (UASs)

Unmanned Aerial Systems (UASs), commonly called drones, have experienced significant growth and utilisation across various industries. With technological advancements, drones have become more affordable, accessible, and sophisticated, offering numerous benefits and applications in agriculture, photography, and courier services. However, alongside their legal applications, there remains a growing concern regarding the potential misuse of drones by terrorist organisations. The use of drones by terrorist organisations or Organised Crime Syndicates (OCS) is not a recent development. To offset their disadvantage in conventional combat, terrorist organisations have increasingly turned to consumer-market technology to develop or purchase small drones.

Despite their limited range of a few kilometres and small size, these drones pose a considerable concern. Such groups can transform these small and affordable consumer drones into what are commonly called “killer bees”, capable of inflicting significant damage and inciting fear among the population. The development of drones has provided terrorist organisations with a new tool to expand their capabilities and conduct asymmetrical warfare. Drones have various benefits that make them desirable to these groups. First, drones provide airborne reconnaissance, surveillance, and intelligence gathering. Terrorist organisations may employ drones to observe and surveil potential targets, acquire information, and plan strikes with greater precision. This vantage point allows them to find vulnerabilities, assess security measures, and design tactics to exploit gaps efficiently.

Moreover, they can be weaponised, threatening Critical Infrastructure (CI) and other crucial establishments, including hospitals. Terrorist organisations can affix chemical agents, explosives, or other harmful payloads to drones, transforming them into lethal weapons. Using drones enables terrorists to bypass traditional security measures and launch attacks from unexpected angles, making it difficult to defend against such threats effectively. One early incident occurred in 1994, when Aum Shinrikyo, a Japanese cult, conducted failed trials involving the release of sarin from a minicopter designed for aerosol crop spraying.[11] Aum Shinrikyo envisioned drones as a way to disseminate chemical or biological substances across a broader area, enhancing the volume and impact of their attacks. With this objective in mind, they attempted to convert commercially available minicopters, small UAVs commonly employed for agricultural purposes, into delivery systems for their deadly chemicals.

The accessibility and affordability of drones have played a significant role in their adoption by terrorist organisations. In recent years, the commercial availability of drones has increased, and their prices have dropped significantly, making them easily attainable. Most terror groups utilise civilian drones, mainly recreational and commercial UAVs, and a few Iranian mid-sized military drones.[12] They prefer civilian drones primarily because of the unique benefits they provide in terms of affordability, accessibility, and user-friendliness. Consumer drones, especially hobbyist models, are an appealing option for terrorist organisations. With prices starting at a few hundred dollars, these drones are affordable even for terrorist groups with limited resources, offering a cheap way to acquire several drones and maximise operational potential while imposing a negligible financial burden. The sophistication of drone technology has also contributed to their appeal. Modern drones have advanced features such as autonomous flight, long-range capabilities, and high-definition cameras. These capabilities enable terrorists to conduct complex operations with minimal human involvement, enhancing their operational security and reducing the risks associated with direct involvement. Furthermore, drones can be remotely controlled, allowing operators to remain safe from potential danger while maintaining real-time control over the drone’s movements and activities.

Incidents of Drone-Assisted Attacks

In 2021, twin drone-assisted explosions occurred at the Indian Air Force base in Jammu, pointing to the involvement of the Pakistan-based terror outfit Lashkar-e-Taiba.[13] The first explosion severely damaged a single-storey building, while the second occurred in an open space on the ground. The investigations into the drone strike on the Indian Air Force Station, Jammu, have yielded substantial information concerning the nature of the attack. It was found that the culprits used low-flying drones to attack at night. This strategy prevented them from being detected and enabled their infiltration into the defence establishment's high-security zone. The drones were used to deliver explosive payloads precisely to their designated targets.

Hezbollah, an Iran-supported militant organisation based in Lebanon, is known for its widespread usage of drones, making it the violent non-State entity with the longest history of drone use. Hezbollah’s drone programme has grown over time. It now reportedly operates a modest fleet of Unmanned Aerial Vehicles (UAVs) that comprises Iranian-made drones like the Ababil and Mirsad-1 (an updated version of the early Iranian Mohajer drone used for reconnaissance of Iraqi troops during the 1980s Iran-Iraq War), as well as domestically developed varieties.[14]

Hamas, a Palestinian militant organisation, has a long-standing strategic collaboration with Hezbollah and Iran that has expanded into technological realms. In 2021, a new threat from Hamas emerged when it released a video of the “Shehab” suicide drone, a loitering munition with built-in warheads.[15] However, the Israel Defense Forces (IDF) have claimed that their Iron Dome air defence system successfully intercepts the vast majority of incoming rockets threatening Israel’s cities.[16] Nevertheless, the terrorist organisation has modified its tactics to take advantage of the system’s flaws.[17]

Following a similar pattern, the Islamic State (IS), or Islamic State of Iraq and Syria (ISIS), began employing drones in 2013. Unlike some other terrorist organisations, IS built its drone programme without advanced military capabilities or the backing of governmental actors, relying instead on easily accessible commercial technologies and Do-It-Yourself (DIY) modifications. Even so, IS developed a robust drone infrastructure, demonstrating an intensive and extensive use of drones among various non-State actors. Moreover, in January 2017, IS reportedly published a declaration in its newsletter "al-Naba" concerning the formation of a new division called the “Unmanned Aircraft of the Mujahideen.”[18] This specialised unit was established with the explicit purpose of designing UAS tailored for deployment in combat situations. This development demonstrates IS's dedication to advancing its drone capabilities and integrating them into its operational strategies.

In 2022, Saudi Arabia announced a ‘military operation’ in Yemen after drone strikes by Houthi rebels targeted an oil depot in Jeddah and other facilities in Riyadh.[19] The targeted facilities, particularly those of Aramco, officially known as the Saudi Arabian Oil group, are critical to the country’s economy and energy infrastructure. These attacks significantly impacted Saudi Arabia's oil production and exports, as well as regional and global energy markets.[20]

Swarm Drone Attacks: From Fiction to a Reality

Swarm drone technology, once a distant concept confined to science fiction, is now a stark reality that could change the face of warfare. A swarm drone attack is a coordinated assault by many drones operating in unison. Rather than depending on a single, larger aircraft, swarm drone strikes use the combined strength of several smaller drones to overwhelm and disrupt targets. These attacks can range from reconnaissance and surveillance missions to more sinister acts like delivering bombs or chemical weapons. The capacity to use multiple drones simultaneously increases the potential impact on targets and, at the extreme, could result in mass casualties and widespread destruction.[21]

While terrorists can procure commercially available drones, commanding many drones in a coordinated fashion presents significant hurdles. Controlling multiple drones necessitates trained operators, a solid communication infrastructure, and a thorough understanding of drone technology. Furthermore, the complexity grows when attempting to combine several drones into a coherent weapon platform, establishing a drone swarm with inter-drone communication. Developing such capabilities involves significant technical competence and access to advanced equipment, which may be out of reach for many terrorist organisations. However, with rapid technological improvements and the ability of criminal networks to share knowledge, entry hurdles may be lowered in the future.

On 05 January 2018, the first-ever swarm drone attack on Russian forces stationed in Syria likely marked a critical turning point in the changing environment of non-State actors’ use of UAVs. This incident revealed how simple and unsophisticated drones might be used to inflict harm on well-equipped military personnel, ushering in a new era of asymmetric warfare and posing new difficulties for security services around the world. During the strike, a swarm of 13 drones loaded with rudimentary bombs targeted the Russian military outpost in Syria. Seven of the drones were intercepted and destroyed by Russian defences, while the remaining six were successfully grounded without causing substantial damage.[22] Although there were no casualties or significant damage, the incident brought worldwide attention to the potential threat posed by swarm drones in the hands of non-State actors.


It is evident that terrorist organisations have improved their technological capabilities, largely through the innovative use of less advanced and easily accessible technology.[23] Moreover, emerging advanced and disruptive technologies will give terrorists additional capabilities for inflicting damage. AI technologies in particular may be easier to access because they are mostly not capital intensive, providing asymmetric advantages to terrorist organisations.[24] Responding to remote attacks by terrorists will be difficult for federal agencies. Hence, it is essential to develop countermeasures to tackle threats from emerging AI technologies.

A total ban on AI proliferation is impossible, as AI is developed primarily by the commercial sector rather than by governments. AI applications such as digital writing assistants for commercial use cannot be banned altogether. However, bans on technologies that threaten people's lives are possible and likely. One such example is Lethal Autonomous Weapon Systems (LAWS), which can select and fire on targets without human involvement. Since 2014, the ethical and security challenges posed by Fully Autonomous Weapons (FAWs) have been the focus of international deliberations held under the auspices of the United Nations (UN) in accordance with the 1981 Convention on Certain Conventional Weapons (CCW). Weapons that autonomously select and kill targets already exist: a UN report found that FAWs hunted down a retreating force in Libya, with the system programmed to attack targets without interaction between the operator and the munition.[25] The UN discussions have not reached an agreeable conclusion; 28 governments support banning these weapons, while the US and Russia block legally binding agreements.[26] Hence, proper legal measures are required to prevent such weapons from falling into the hands of terrorist organisations.

Moreover, developing and deploying automated algorithms for detecting deepfakes is a significant step towards mitigating their rise. The Defense Advanced Research Projects Agency (DARPA), a pioneering research and development agency of the US Department of Defense (DoD), invested heavily in detection technologies through two overlapping programmes: Media Forensics (MediFor), which ended in 2021, and Semantic Forensics (SemaFor). These programmes aimed to develop advanced technologies for detecting deepfake media, including images and videos, focusing on algorithms and tools capable of analysing multimedia content to identify signs of manipulation and indicate its authenticity.[27]

Several countries, including India and China, have recently criminalised the use of deepfakes for malicious purposes. China amended its criminal law to specifically address deepfakes, making their use for fraud and the dissemination of false information punishable.[28] India has also introduced draft legislation to amend its Information Technology Act, aiming to criminalise the creation and distribution of harmful deepfake content.[29] To mitigate the impact of deepfakes, alongside sophisticated detection technology, enabling tools like reverse image search can assist journalists, fact-checkers, and average internet users in identifying distorted content, verifying authenticity, and combating disinformation. Integrating these technologies into public awareness campaigns encourages responsible media consumption, creating a more vigilant and informed online community. Continuous improvement and user education are critical for staying ahead of new deepfake approaches and fostering a comprehensive approach to addressing deepfake challenges in the digital era.[30]
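The matching idea behind reverse image search can be sketched with perceptual hashing: a compact fingerprint that stays stable under small edits but differs sharply for unrelated images. The average-hash below is a minimal illustration, not the algorithm of any particular search service, and the 4x4 "images" are toy data.

```python
def average_hash(pixels):
    """Simple average hash (aHash) of a grayscale image given as a 2-D
    list of 0-255 values: each bit records whether a pixel is brighter
    than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two near-identical toy "images": the second has one altered pixel,
# as a re-encoded or lightly edited copy might.
img_a = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
img_b = [row[:] for row in img_a]
img_b[0][0] = 30   # small edit

dist = hamming(average_hash(img_a), average_hash(img_b))  # stays 0
```

A small Hamming distance flags a likely re-circulated or lightly manipulated copy of a known image, which is why such fingerprints help fact-checkers trace fabricated content back to its source material.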

When it comes to countering the hostile use of drones by terrorist organisations, there is no single countermeasure. The first is geofencing critical infrastructure and military bases, which prevents GPS-enabled drones from entering such areas.[31] Geofencing creates virtual boundaries around a physical location using GPS or Radio Frequency Identification (RFID), with the aim of restricting the movement of drones.[32] Complementary systems send radio signals that overwhelm the operator’s transmitter, causing the drone to land or return to a pre-programmed location.[33]
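At its core, a circular geofence check reduces to comparing a great-circle distance against a radius. The sketch below illustrates that idea in plain Python; the coordinates and radius in the usage comment are hypothetical, not any real no-fly zone, and real flight controllers layer far more logic (polygonal zones, altitude limits, firmware enforcement) on top.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude
    points, using the haversine formula and a mean Earth radius."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(drone_pos, fence_centre, radius_m):
    """True if the drone's GPS fix falls within a circular no-fly zone."""
    return haversine_m(*drone_pos, *fence_centre) <= radius_m

# Hypothetical example: a 5 km no-fly zone around an arbitrary point.
fence = (28.6139, 77.2090)
near = inside_geofence((28.6200, 77.2100), fence, 5000.0)   # within ~1 km
far = inside_geofence((28.7000, 77.3000), fence, 5000.0)    # ~13 km away
```

When the check returns True, compliant firmware typically refuses take-off or commands a return-to-home, which is what makes geofencing a purely passive, regulation-dependent defence: it only restrains drones that honour the boundary.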

Another important countermeasure is deploying Anti-Drone Systems (ADS), which can quickly identify and jam micro drones and use a laser-based kill mechanism to destroy the target. Global Navigation Satellite System (GNSS) jammers are also deployed alongside Radio Frequency (RF) jammers, forcing the UAS to land immediately. In addition, technologies such as high-power microwave counter-drone systems, which have demonstrated the ability to take down multiple drones, are being developed; they use electromagnetic radiation to destroy the internal electronics of drones within seconds.[34]


While the exploitation of AI-enabled capabilities by terrorist groups is still in its infancy, it is critical to remain cognisant of developments in this field. Such organisations' ambition to harness emerging technologies necessitates proactive steps to keep ahead of possible risks. The rising public accessibility of AI raises concerns about its potential use by terrorists, especially as AI continues to be integrated into essential infrastructure.

The emergence of weaponised deepfake technology is a significant challenge, since it has the potential to revolutionise deception by producing highly realistic and almost undetectable fake audio and video recordings. These advanced deepfakes pose considerable hazards: they are difficult to defend against, lack defined escalation boundaries, and prey on cognitive biases.

Furthermore, the increasing deployment of civilian-use drones raises various security challenges. With improved capabilities and increased availability, these drones have emerged as possible tools for hostile groups to carry out attacks and gather intelligence. The regulatory framework on drones remains complicated, requiring a layered approach to countermeasures comprising regulatory, passive, and active techniques. As non-State actors gain access to more sophisticated technologies, such as artificial intelligence and drones, the frequency and sophistication of such attacks are likely to increase. The prospect of State actors employing UAV-equipped proxies adds further complexity and potential for escalation to the threat picture.


[1] “Terrorists inciting people via fake news, J&K tells SC; opposes 4G internet in UT.” (2021, May 1). The Hindu. Retrieved July 10, 2023.
[2] Cole. (2022, March 16). Hacked News Channel and Deepfake of Zelenskyy Surrendering Is Causing Chaos Online. Tech by Vice. Retrieved June 13, 2023.
[3] Lis & Olech. (2021, January 29). Technology and Terrorism: Artificial Intelligence in the Time of Contemporary Terrorist Threats. Institute of New Europe.
[4] Braddock, K. (2020, April 16). Weaponized Words: The Strategic Role of Persuasion in Violent Radicalization and Counter-Radicalization.
[5] Tactics of Disinformation. (2022, October 18). Cybersecurity & Infrastructure Security Agency. Retrieved July 18, 2023.
[6] Harding. (2020, October 29). Rocket Chat a pillar of propaganda as ISIS adopts new marketing campaign strategy. The National News. Retrieved July 18, 2023.
[7] Slack-like interface.
[8] King. (2019, April 10). Islamic State group’s experiments with the decentralised web. Europol, European Counter Terrorism Centre. Retrieved July 18, 2023.
[9] Townsend. (2023, June 4). AI poses national security threat, warns terror watchdog. The Guardian. Retrieved July 6, 2023.
[10] Artificially intelligent chatbots ‘could encourage terrorism.’ (2023, April 10). Bang Showbiz English. Retrieved July 6, 2023.
[11] Chávez & Swed. (2020). Off the Shelf: The Violent Nonstate Actor Drone Threat. Air & Space Power Journal, 34(3).
[13] Drone attack: Initial probe hints at Lashkar role, says J&K Police chief. (2021, June 29). India Today. Retrieved July 11, 2023.
[14] Hoeneg. (2014, May 6). Hezbollah and the Use of Drones as a Weapon of Terrorism. Federation of American Scientists. Retrieved July 6, 2023.
[15] Fabian. (2021, May 14). Hamas releases video of ‘Shehab’ kamikaze drone. The Times of Israel. Retrieved July 6, 2023.
[16] Gaulkin. (2021, May 20). Drones add little to rocket-filled Israel-Palestine skies, but represent growing global threat. Bulletin of the Atomic Scientists. Retrieved July 8, 2023.
[17] Iron Dome. (n.d.). The Jerusalem Post. Retrieved July 8, 2023.
[18] Warrick. (2017, February 21). Use of weaponized drones by ISIS spurs terrorism fears. The Washington Post. Retrieved July 10, 2023.
[19] Saudi Aramco’s Jeddah oil depot hit by Houthi attack. (2022, March 25). Al Jazeera. Retrieved July 11, 2023.
[21] Kallenborn, Z., Ackerman, G., & Bleek, P. C. (2022, May 20). A Plague of Locusts? A Preliminary Assessment of the Threat of Multi-Drone Terrorism. Terrorism and Political Violence, 1–30.
[22] Reid. (2018, January 11). A swarm of armed drones attacked a Russian military base in Syria. CNBC. Retrieved July 11, 2023.
[23] Kreps. (2021, November). Democratizing Harm: Artificial Intelligence in the Hands of Nonstate Actors. Foreign Policy at Brookings, The Brookings Institution. Retrieved July 13, 2023.
[25] The Prohibition of Lethal Autonomous Weapon Systems. (2023, June 3). European Greens. Retrieved July 15, 2023.
[26] Brzozowski. (2019, November 22). No progress in UN talks on regulating lethal autonomous weapons. Euractiv. Retrieved July 21, 2023.
[27] Engler. (2019, November 14). Fighting deepfakes when detection fails. Brookings. Retrieved July 15, 2023.
[28] Statt. (2019, November 30). China makes it a criminal offense to publish deepfakes or fake news without disclosure. The Verge. Retrieved July 16, 2023.
[29] Jha, P., & Jain, S. (2023). Detecting and Regulating Deepfakes in India: A Legal and Technological Conundrum. SSRN Electronic Journal.
[30] Op. cit. 27.
[31] Pledger. (2021, February). The Role of Drones in Future Terrorist Attacks. Land Warfare Paper 137.
[32] Droning on about geofencing: A deep dive into the world of drone restrictions and boundaries. (2023, January 3). XDynamics. Retrieved July 16, 2023.
[34] Crino & Dreby. (n.d.). Drone Attacks Against Critical Infrastructure: A Real and Present Threat. Atlantic Council.

(The paper is the author’s individual scholastic articulation. The author certifies that the article/paper is original in content, unpublished and it has not been submitted for publication/web upload elsewhere, and that the facts and figures quoted are duly referenced, as needed, and are believed to be correct). (The paper does not necessarily represent the organisational stance...)
