Indrajit Ray

Selected Projects (Past and Present)



Cyber Attack Classification and Simulation in a Nuclear Power Plant Simulation Framework (ongoing)

This interdisciplinary project with researchers in nuclear engineering investigates cyber security issues in the nuclear power industry. Nuclear power plant (NPP) operations are governed by a series of procedures that prescribe what operators will do and, therefore, what they will observe. Events come and go and are diagnosed through the filters set by those procedures. However, it is often difficult to distinguish between failures and cyber attacks by observing the events occurring in the nuclear facility. This work develops methodologies to help classify cyber attacks on a nuclear power plant into different types. Such a classification would allow a plant operator to differentiate between security- and safety-related events and take more relevant actions to better handle the situation. The major deliverables are (i) a database of cyber attacks that would help in attack classification, (ii) a methodology to classify events as cyber attacks or system failures and to categorize the attacks into different types, and (iii) a toolset to simulate attacks within the broader NPP simulation framework.
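As a rough illustration of the classification step, the sketch below triages a plant event as a likely component failure or a suspected cyber attack. The features, thresholds, and rules here are assumptions made purely for illustration, not the project's actual methodology or database.

```python
# Illustrative sketch only: the event features and triage rules below are
# hypothetical, not the project's actual attack-classification methodology.
from dataclasses import dataclass

@dataclass
class PlantEvent:
    sensor_deviation: float        # magnitude of deviation from the expected value
    network_anomaly: bool          # unusual traffic observed on the plant network
    affects_redundant_units: bool  # independent redundant units failing together

def classify(event: PlantEvent) -> str:
    """Rule-of-thumb triage: a sensor anomaly correlated with network
    anomalies, or simultaneous failures of independent redundant units,
    suggests an attack rather than a random component failure."""
    if event.network_anomaly and event.sensor_deviation > 0.2:
        return "suspected cyber attack"
    if event.affects_redundant_units:
        # independent hardware failures rarely coincide
        return "suspected cyber attack"
    if event.sensor_deviation > 0.2:
        return "suspected component failure"
    return "normal"
```

In a real classifier the rules would be derived from the attack database named in deliverable (i) rather than hard-coded thresholds.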
Department of Energy
Intelligent Agents for Protecting Users in Cyberspace (ongoing)

This interdisciplinary project studies the nature of the risks inherent in normal activity on the Internet, the perception of those risks, the judgment about trade-offs in behavior and the design of a personalized agent that can alert users to risky behavior and help to protect them. The key insight is that adequate security and privacy protection requires the concerted efforts of both the computer and the user. The interdisciplinary research team combines expertise from psychology, computer security and artificial intelligence to propose MIPA (MIxed Initiative Protective Agent) - a semi-autonomous, intelligent and personalized agent approach that leverages psychological studies of what users want/need and what security and privacy risks are imminent. The techniques will be developed for and tested on a real problem that challenges the current state of the art in artificial intelligence, security and user models.

As it is becoming increasingly difficult for users to protect themselves and understand the risks they are taking on the Internet, this project has the potential to positively impact system design to effectively enhance user security. Focusing on home computer users (college students and senior citizens), the proposed research will investigate how they perceive, use and can best be served by Internet application software. Results could improve the experiences of these users as well as significantly advance techniques in intelligent agents and computer security. Additionally, because home users and machines tend to be the weak link in security, protecting them may better protect others.

National Science Foundation
Addressing Security Challenges in Pervasive Computing Environments

Pervasive computing is an emerging paradigm that uses numerous, casually accessible, often invisible computing and sensor devices that are frequently mobile or embedded in the environment and that are interconnected to each other by wireless or wired technology. Because they are embedded in the environment and strongly interconnected, pervasive computing devices can exploit knowledge about the operating environment in a net-centric manner. Thus, they provide a rich new set of services and functionalities that are not possible through conventional means.

Although pervasive computing technology looks promising, one critical challenge needs to be addressed before it can be widely deployed -- security. The very knowledge that enables a pervasive computing application to provide better services and functionalities may easily be misused, causing security breaches. The problem is serious because pervasive computing applications involve interactions between a large number of entities that can span different organizational boundaries. Unlike traditional applications, these applications do not usually have a well-defined security perimeter and are dynamic in nature. Moreover, these applications use knowledge of surrounding physical spaces. This requires security policies to use contextual information that, in turn, must be adequately protected from security breaches. Uncontrolled disclosure of information or unconstrained interactions among entities can lead to very serious consequences. Traditional security policies and mechanisms rarely address these issues and are thus inadequate for securing pervasive computing applications.

This work seeks to develop a new model and framework for securing pervasive computing applications. It proposes new security policies and models and shows how these can be used to design such applications. The first step is to identify the policies needed in a pervasive computing environment and to develop models that formalize their syntax and semantics. Unlike traditional policy models where the subjects are known a priori, pervasive computing applications may need to interact with entities who are not completely trusted. Therefore, the second step is to formalize a suitable trust model and develop strategies for establishing trust between entities. The model must accommodate the notion of different degrees of trust, identify how to determine the trust value, and define how trust changes over time. The trust negotiation strategies must take into account the constraints imposed by pervasive computing applications, such as timing constraints. The third and final step is to use the models developed previously to design secure pervasive computing applications.
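To make the idea of a context-dependent policy concrete, here is a minimal sketch of an access decision that uses physical context (location and time) rather than a fixed perimeter. The roles, locations, and time window are invented for illustration; the project's actual policy model is a formal one, not code like this.

```python
# Hedged sketch of a context-aware access policy. The roles, locations,
# and business-hours window are illustrative assumptions only.
from datetime import time

def allow_access(user_role: str, location: str, now: time) -> bool:
    """Grant access only when role, physical context, and time all satisfy
    the policy -- unlike a perimeter model, the decision depends on
    knowledge of the surrounding physical space."""
    in_hours = time(8, 0) <= now <= time(18, 0)
    if user_role == "clinician" and location == "ward" and in_hours:
        return True
    if user_role == "admin" and location == "server_room":
        return True
    return False
```

Note that the contextual inputs (here, `location`) are themselves security-sensitive and would need the protection the paragraph above calls for.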

Air Force Office of Scientific Research
A Model of Trust for Developing Trustworthy Systems from Untrustworthy Actors

In the present world of information exchange, numerous heterogeneous but cooperative agents are involved in a globally connected network. The locational and operational diversity of these agents makes confidentiality, integrity and availability of systems and information resources increasingly critical in our everyday life. To protect such resources and to ensure that they behave according to stated requirements, it is therefore important that we are able to determine the appropriate security policies. The notion of trust plays a crucial role in the proper formulation of security policies, since we expect agents and systems to work according to our sociological expectation of trust in terms of confidentiality, integrity and availability.

Almost all existing models of trust that allow reasoning about trust relationships take a binary view of trust - complete trust or no trust at all. This prevents one from rationally evaluating the trust in systems that are composed of different sub-systems, each of which is either trusted or not trusted.

Consider, for example, the operational information base in a large corporation. Typically, this is generated by accumulating information from several sources. Some of these sources are under the direct administrative control of the corporation and thus are considered trustworthy. Other sources are “friendly” sources, and information originating directly from them is also considered trustworthy. However, these “friendly” sources may have derived information from their own sources, about which the corporation has no first-hand knowledge. If such third-hand information is made available to the corporation, then the corporation has no real basis for determining the quality (in terms of trustworthiness) of that information. It would be rather naive for the corporation to trust this information to the same extent that it trusts information from sources under its direct control. Similarly, not trusting this information at all would be too simplistic. The existing binary models of trust, in which trust has only two values, “no trust” and “complete trust”, will nonetheless force the trust value into one of these two levels. Hence, the following questions cannot be answered satisfactorily.

  1. Can the composite information be trusted at all?

  2. If it can be trusted, should there be any constraint on the trust, or should it be complete, unconstrained trust?

Therefore, there is a need to have a formal model of trust which

  1. is more inclusive than the current binary model.

  2. has a notion of degrees of trust.

  3. has procedures to compare information at different degrees of trust.

  4. has procedures for trust composability, that is, defines methods that allow one to combine information belonging to different degrees of trust and to determine the degree of trust of the resulting information.

  5. has processes and procedures to establish and manage trust.
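The degrees-of-trust and composability requirements above can be sketched with a simple numeric model. Representing trust as a value in [0, 1] and the two composition rules below are assumptions made for illustration; the project's formal model may define these differently.

```python
# Minimal sketch of a non-binary trust model. Trust values in [0, 1] and
# the specific composition operators are illustrative assumptions only.

def chain(*trusts: float) -> float:
    """Trust in information relayed through a chain of sources:
    multiplying degrades trust at each hop."""
    result = 1.0
    for t in trusts:
        result *= t
    return result

def consensus(t1: float, t2: float) -> float:
    """Trust when two independent sources corroborate the same
    information: probabilistic OR increases trust."""
    return t1 + t2 - t1 * t2

# The corporation trusts a friendly source at 0.9; that source trusts
# its own source (unknown to the corporation) at 0.6.
third_hand = chain(0.9, 0.6)   # 0.54 -- neither full trust nor no trust
```

Unlike the binary model, this lets the corporation rank the third-hand information between "no trust" and "complete trust" and compare it against information from its directly controlled sources.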


Federal Aviation Administration, Air Force Research Laboratory
A Framework for Secure and Survivable Transaction Processing

This research is concerned with developing a model for secure and survivable, yet flexible, transaction processing. Special emphasis will be on the integrity and availability issues of transactions, although confidentiality issues will not be ignored. The outcome of this research will be a flexible framework for transaction processing, based on a tool-kit approach. The framework assists the developer in designing secure, complex applications that can survive malicious attacks and other system failures. The model allows the developer to

  • express complex transaction dependencies in a secure and reliable manner
  • provide authentication expressions within a transaction body that allows component subtransactions to interact with each other in an authenticated and secure manner
  • analyze the dependencies to identify if they can be exploited to launch attacks against the transaction, and if so, define remedial actions within the transaction
  • provide customized (geared to the specific application) resistance, recognition and recovery techniques within the transaction so as to survive malicious attacks
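As a concrete (and purely hypothetical) illustration of the first and third bullets, the sketch below expresses commit-ordering dependencies among subtransactions and checks them for cycles, a structural flaw that could be exploited to stall the transaction. The primitive name and the analysis are invented for illustration, not the model's actual primitives.

```python
# Hypothetical sketch: expressing inter-subtransaction dependencies and
# checking for dependency cycles. Names and dependency kinds are
# illustrative, not the Secure Multiform Transaction model's primitives.
from collections import defaultdict

class TransactionSpec:
    def __init__(self):
        self.deps = defaultdict(set)   # subtransaction -> prerequisites

    def begin_on_commit(self, sub: str, prerequisite: str) -> None:
        """`sub` may begin only after `prerequisite` commits."""
        self.deps[sub].add(prerequisite)

    def find_cycle(self) -> bool:
        """A dependency cycle means no valid execution order exists --
        a flaw an attacker could exploit to deny service."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)       # defaults to WHITE

        def visit(node):
            color[node] = GRAY
            for pre in self.deps[node]:
                if color[pre] == GRAY:              # back edge: cycle
                    return True
                if color[pre] == WHITE and visit(pre):
                    return True
            color[node] = BLACK
            return False

        return any(color[n] == WHITE and visit(n) for n in list(self.deps))
```

A remedial action, in the spirit of the third bullet, might be to reject such a specification at design time or to attach a timeout-and-abort rule to the offending dependency.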

The proposed research effort is structured into the following research activities.

  1. Investigate how the different types of data and control-flow dependencies in extended transaction models impact secure transaction processing with respect to integrity and availability.
  2. Investigate how the different types of control-flow, data and external dependencies interact with each other to affect security.
  3. Formalize the notion of a well-behaved transaction. Informally, a well-behaved transaction is one that survives and/or gracefully degrades under attack.
  4. Develop the Secure Multiform Transaction model as a tool-kit approach to implementing well-behaved transactions.
  5. Propose a set of transaction primitives to express resistance, recognition and recovery procedures within a secure multiform transaction.
  6. Develop a proof-of-concept prototype for the Secure Multiform Transaction model from COTS components.

The proposed research advances the current state-of-the-art in secure transaction processing. This research is significant because it will produce results that can be used to develop complex yet secure and easily deployable transactions. Such transactions find application in a variety of different areas -- communications, finance, electronic commerce, manufacturing, process control and office automation, to name a few. These applications are characterized by their need for complex coordination among different components and (often) long duration -- two properties that impose substantial integrity, availability and confidentiality requirements that are yet to be addressed by the research community.

National Science Foundation
Towards a Proactive Approach to Defense Against and Recovery from Cyber Attacks

The classical approach to defending against cyber attacks has been to identify intrusions or anomalies as best as possible and then take appropriate security control actions to mitigate the effects of such attacks. However, such an approach is severely constrained because it is, after all, an after-the-fact effort. Moreover, it does not allow a graceful degradation of service for mission survivability after an attack has been identified. This is because modern-day DDoS attacks occur too fast to provide a window of opportunity for launching mitigating services.

This project proposes a pro-active approach that is based on (a) having a comprehensive knowledge-base of all paths that can potentially be exploited for an attack on a system, and cost estimates of the resulting damage, and (b) monitoring system events to estimate the probability of security attacks happening in the future. In this manner, the approach provides enough opportunity for contingency planning. In particular, the approach allows (a) re-allocating defensive resources in a timely manner to assist in data collection by logging activities aggressively, saving system states more frequently and more comprehensively, and initiating recovery activities by coordinating with other monitors, (b) establishing a multi-layered defensive framework in real time, including ways to isolate and contain attacks, and (c) re-distributing essential services to other, safer portions of the network to allow mission survivability.
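A toy version of the knowledge-base in point (a) could rank candidate attack paths by expected damage (probability times cost) to decide where to pre-position defensive resources. The paths, probabilities, and costs below are entirely invented for illustration; the project's knowledge-base and estimation methods are more sophisticated than this.

```python
# Illustrative sketch only: a tiny knowledge-base of potential attack
# paths with assumed exploit probabilities and damage costs, ranked by
# expected damage to prioritize defensive resource allocation.

attack_paths = [
    # (path description,                              probability, damage cost)
    ("phishing -> workstation -> domain controller",  0.15,        900_000),
    ("VPN flaw -> file server",                       0.05,        400_000),
    ("DDoS on public web tier",                       0.30,        120_000),
]

def ranked_by_expected_damage(paths):
    """Expected damage = probability x cost; defend the highest first."""
    return sorted(paths, key=lambda p: p[1] * p[2], reverse=True)

for desc, prob, cost in ranked_by_expected_damage(attack_paths):
    print(f"{desc}: expected damage ${prob * cost:,.0f}")
```

The monitoring component in point (b) would update the probability estimates as events are observed, shifting the ranking and triggering the contingency actions described above.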
Air Force Research Laboratory