Management information system:
A management information system (MIS) is a subset of the overall internal controls of a business, covering the application of people, documents, technologies, and procedures by management accountants to solve business problems such as costing a product or service, or developing a business-wide strategy. Management information systems are distinct from regular information systems in that they are used to analyze other information systems applied in the operational activities of the organization. Academically, the term is commonly used to refer to the group of information management methods tied to the automation or support of human decision making, e.g. decision support systems, expert systems, and executive information systems.
It has been described as follows: "MIS 'lives' in the space that intersects technology and business. MIS combines tech with business to get people the information they need to do their jobs better/faster/smarter. Information is the lifeblood of all organizations - now more than ever. MIS professionals work as systems analysts, project managers, systems administrators, etc., communicating directly with staff and management across the organization."
Overview:
At the start, in businesses and other organizations, internal reporting was produced manually and only periodically, as a by-product of the accounting system with some supplementary statistics, and it gave limited and delayed information on management performance. Previously, data had to be sorted out individually by people according to the requirements and necessities of the organization. Later, data was distinguished from information: instead of a mass of collected data, only the pertinent data that the organization needed was stored.
In their infancy, business computers were used for the practical business of computing the payroll and keeping track of accounts payable and accounts receivable. As applications were developed that provided managers with information about sales, inventories, and other data that would help in managing the enterprise, the term "MIS" arose to describe these kinds of applications. Today, the term is used broadly in a number of contexts and includes (but is not limited to): decision support systems, resource and people management applications, ERP, SCM, CRM, project management and database retrieval applications.
Definition:
An MIS is a planned system for collecting, processing, storing and disseminating data in the form of information needed to carry out the functions of management. In a way, it is a documented report of the activities that were planned and executed. According to Philip Kotler, "A marketing information system consists of people, equipment, and procedures to gather, sort, analyze, evaluate, and distribute needed, timely, and accurate information to marketing decision makers."
The terms MIS and information system are often confused. Information systems include systems that are not intended for decision making. The area of study called MIS is sometimes referred to, in a restrictive sense, as information technology management. That area of study should not be confused with computer science. IT service management is a practitioner-focused discipline. MIS also differs from enterprise resource planning (ERP), as ERP incorporates elements that are not necessarily focused on decision support.
Professor Allen S. Lee states that "...research in the information systems field examines more than the technological system, or just the social system, or even the two side by side; in addition, it investigates the phenomena that emerge when the two interact."
A management information system has been defined in several ways:
1) A system that provides information support for decision making in the organization.
2) An integrated system of man and machine for providing information to support operations.
3) A computer-based information system.
See also
Bachelor of Computer Information Systems
Computing
Management
Business Intelligence
Business Performance Management
Business rules
Data Mining
Predictive analytics
Purchase order request
Enterprise Information System
Enterprise Architecture
Information technology governance
Information technology management
Knowledge management
Management by objectives
Online analytical processing
Online office suite
Information Technology
Bachelor of Computer Information Systems:
The Bachelor of Computer Information Systems is a bachelor's degree, similar to the Bachelor of Science in Information Technology and Bachelor of Computer Science, but focused more on practical applications of technology to support organizations while adding value to their offerings. In order to apply technology effectively in this manner, a broad range of subjects is covered, such as communications, business, networking, software design, and mathematics. Some BCIS programs offer minors or concentrations as options to the degree program. Some computer information systems programs have received accreditation from ABET, the recognized U.S. accreditor of college and university programs in applied science, computing, engineering, and technology.
Computing:
Computing is usually defined as the activity of using and improving computer technology, including computer hardware and software. It is the computer-specific part of information technology. Computer science (or computing science) is the study of the theoretical foundations of information and computation and of their implementation and application in computer systems.
Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on. The list is virtually endless, and the possibilities are vast."
A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop the algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions is converted to machine instructions according to the type of central processing unit. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.
Management:
Management in all business and human organization activity is the act of getting people together to accomplish desired goals and objectives. Management comprises planning, organizing, staffing, leading or directing, and controlling an organization (a group of one or more people or entities) or effort for the purpose of accomplishing a goal. Resourcing encompasses the deployment and manipulation of human resources, financial resources, technological resources, and natural resources.
Management can also refer to the person or people who perform the act(s) of management.
History:
The verb manage comes from the Italian maneggiare (to handle, especially tools), which in turn derives from the Latin manus (hand). The French word mesnagement (later ménagement) influenced the development in meaning of the English word management in the 17th and 18th centuries.
Some definitions of management are:
Organization and coordination of the activities of an enterprise in accordance with certain policies and in achievement of clearly defined objectives. Management is often included as a factor of production along with machines, materials, and money. According to the management guru Peter Drucker (1909–2005), the basic task of management is twofold: marketing and innovation.
Directors and managers who have the power and responsibility to make decisions to manage an enterprise. As a discipline, management comprises the interlocking functions of formulating corporate policy and organizing, planning, controlling, and directing the firm's resources to achieve the policy's objectives. The size of management can range from one person in a small firm to hundreds or thousands of managers in multinational companies. In large firms the board of directors formulates the policy which is implemented by the chief executive officer.
Theoretical scope
Mary Parker Follett (1868–1933), who wrote on the topic in the early twentieth century, defined management as "the art of getting things done through people". She also described management as philosophy.[2] One can also think of management functionally, as the action of measuring a quantity on a regular basis and of adjusting some initial plan, or as the actions taken to reach one's intended goal. This applies even in situations where planning does not take place. From this perspective, Frenchman Henri Fayol[3] considers management to consist of seven functions:
Planning
Organizing
Leading
Coordinating
Controlling
Staffing
Motivating
Some people, however, find this definition, while useful, far too narrow. The phrase "management is what managers do" occurs widely, suggesting the difficulty of defining management, the shifting nature of definitions, and the connection of managerial practices with the existence of a managerial cadre or class.
One habit of thought regards management as equivalent to "business administration" and thus excludes management in places outside commerce, as for example in charities and in the public sector. More realistically, however, every organization must manage its work, people, processes, technology, etc. in order to maximize its effectiveness. Nonetheless, many people refer to university departments which teach management as "business schools." Some institutions (such as the Harvard Business School) use that name while others (such as the Yale School of Management) employ the more inclusive term "management."
English speakers may also use the term "management" or "the management" as a collective word describing the managers of an organization, for example of a corporation. Historically this use of the term was often contrasted with the term "Labor" referring to those being managed.
Nature of managerial work:
In for-profit work, management has as its primary function the satisfaction of a range of stakeholders. This typically involves making a profit (for the shareholders), creating valued products at a reasonable cost (for customers), and providing rewarding employment opportunities (for employees). Nonprofit management adds the importance of keeping the faith of donors. In most models of management/governance, shareholders vote for the board of directors, and the board then hires senior management. Some organizations have experimented with other methods (such as employee-voting models) of selecting or reviewing managers, but this occurs only very rarely.
In the public sector of countries constituted as representative democracies, voters elect politicians to public office. Such politicians hire many managers and administrators, and in some countries like the United States political appointees lose their jobs on the election of a new president/governor/mayor.
Historical development:
Difficulties arise in tracing the history of management. Some see it (by definition) as a late modern (in the sense of late modernity) conceptualization. On those terms it cannot have a pre-modern history, only harbingers (such as stewards). Others, however, detect management-like thought as far back as Sumerian traders and the builders of the pyramids of ancient Egypt. Slave-owners through the centuries faced the problems of exploiting/motivating a dependent but sometimes unenthusiastic or recalcitrant workforce, but many pre-industrial enterprises, given their small scale, did not feel compelled to face the issues of management systematically. However, innovations such as the spread of Arabic numerals (5th to 15th centuries) and the codification of double-entry book-keeping (1494) provided tools for management assessment, planning and control.
Given the scale of most commercial operations and the lack of mechanized record-keeping and recording before the industrial revolution, it made sense for most owners of enterprises in those times to carry out management functions by and for themselves. But with growing size and complexity of organizations, the split between owners (individuals, industrial dynasties or groups of shareholders) and day-to-day managers (independent specialists in planning and control) gradually became more common.
Data mining:
Data mining is the process of extracting patterns from data. It is becoming an increasingly important tool for transforming data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.
Data mining can be used to uncover patterns in data but is often carried out only on samples of data. The mining process will be ineffective if the samples are not a good representation of the larger body of data. Data mining cannot discover patterns that may be present in the larger body of data if those patterns are not present in the sample being "mined". Inability to find patterns may become a cause for some disputes between customers and service providers. Therefore, data mining is not foolproof, but it may be useful if sufficiently representative data samples are collected. The discovery of a particular pattern in a particular set of data does not necessarily mean that the pattern holds elsewhere in the larger data from which that sample was drawn. An important part of the process is the verification and validation of patterns on other samples of data.
The term data mining has also been used to describe data dredging and data snooping. However, dredging and snooping can be (and sometimes are) used as exploratory tools when developing and clarifying hypotheses.
Background:
Humans have been "manually" extracting patterns from data for centuries, but the increasing volume of data in modern times has called for more automated approaches. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have increased data collection and storage. As data sets have grown in size and complexity, direct hands-on data analysis has increasingly been augmented with indirect, automatic data processing. This has been aided by other discoveries in computer science, such as neural networks, clustering, genetic algorithms (1950s), decision trees (1960s) and support vector machines (1980s). Data mining is the process of applying these methods to data with the intention of uncovering hidden patterns.[1] It has been used for many years by businesses, scientists and governments to sift through volumes of data such as airline passenger trip records, census data and supermarket scanner data to produce market research reports. (Note, however, that reporting is not always considered to be data mining.)
A primary reason for using data mining is to assist in the analysis of collections of observations of behaviour. Such data are vulnerable to collinearity because of unknown interrelations. An unavoidable fact of data mining is that the (sub-)set(s) of data being analysed may not be representative of the whole domain, and therefore may not contain examples of certain critical relationships and behaviours that exist across other parts of the domain. To address this sort of issue, the analysis may be augmented using experiment-based and other approaches, such as Choice Modelling for human-generated data. In these situations, inherent correlations can be either controlled for, or removed altogether, during the construction of the experimental design.
There have been some efforts to define standards for data mining, for example the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). These are evolving standards; later versions of these standards are under development. Independent of these standardization efforts, freely available open-source software systems like the R Project, Weka, KNIME, RapidMiner and others have become an informal standard for defining data-mining processes. The first three of these systems are able to import and export models in PMML (Predictive Model Markup Language) which provides a standard way to represent data mining models so that these can be shared between different statistical applications. PMML is an XML-based language developed by the Data Mining Group (DMG)[2], an independent group composed of many data mining companies. PMML version 4.0 was released in June 2009.
Research and evolution:
In addition to industry driven demand for standards and interoperability, professional and academic activity have also made considerable contributions to the evolution and rigour of the methods and models; an article published in a 2008 issue of the International Journal of Information Technology and Decision Making summarises the results of a literature survey which traces and analyses this evolution.
The premier professional body in the field is the Association for Computing Machinery's Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD).[citation needed] Since 1989 it has hosted an annual international conference and published its proceedings, and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations". Other computer science conferences on data mining include:
DMIN - International Conference on Data Mining
DMKD - Research Issues on Data Mining and Knowledge Discovery
ECML-PKDD - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
ICDM - IEEE International Conference on Data Mining
MLDM - Machine Learning and Data Mining in Pattern Recognition
SDM - SIAM International Conference on Data Mining
Process:
Knowledge Discovery in Databases (KDD) is the name coined by Gregory Piatetsky-Shapiro in 1989 to describe the process of finding interesting, interpretable, useful and novel patterns in data. There are many nuances to this process, but roughly the steps are to preprocess raw data, mine the data, and interpret the results.
Pre-processing:
Once the objective for the KDD process is known, a target data set must be assembled. As data mining can only uncover patterns already present in the data, the target dataset must be large enough to contain these patterns while remaining concise enough to be mined in an acceptable timeframe. A common source for data is a data mart or data warehouse.
The target set is then cleaned. Cleaning removes the observations with noise and missing data.
The clean data are reduced into feature vectors, one vector per observation. A feature vector is a summarized version of the raw data observation. For example, a black and white image of a face which is 100px by 100px would contain 10,000 bits of raw data. This might be turned into a feature vector by locating the eyes and mouth in the image. Doing so would reduce the data for each vector from 10,000 bits to three codes for the locations, dramatically reducing the size of the dataset to be mined, and hence reducing the processing effort. The feature(s) selected will depend on what the objective(s) is/are; obviously, selecting the "right" feature(s) is fundamental to successful data mining.
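To make this reduction concrete, here is a minimal Python sketch; the band-centroid summary below is a toy stand-in for a real eye/mouth locator, and the random image is placeholder data:
```python
import numpy as np

def to_feature_vector(image):
    """Toy stand-in for the eye/mouth locator described above.

    `image` is a 100x100 boolean array (the 10,000 raw bits). Rather than
    real landmark detection, each horizontal band of the face is summarized
    by the centroid of its dark pixels: three coarse "location codes"
    instead of 10,000 bits.
    """
    bands = np.array_split(image, 3, axis=0)   # top / middle / bottom band
    features = []
    for band in bands:
        ys, xs = np.nonzero(band)              # dark-pixel coordinates
        features.append(xs.mean() if xs.size else -1.0)
    return np.array(features)

face = np.random.rand(100, 100) > 0.5          # placeholder "image"
print(to_feature_vector(face))                 # e.g. [49.7 50.1 49.9]
```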
The feature vectors are divided into two sets, the "training set" and the "test set". The training set is used to "train" the data mining algorithm(s), while the test set is used to verify the accuracy of any patterns found.
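A minimal sketch of this split, assuming scikit-learn is available and substituting random placeholder data for real feature vectors:
```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 3)             # 200 feature vectors (placeholder data)
y = np.random.randint(0, 2, size=200)  # known label for each observation

# Hold out 25% of the observations as the test set; the rest trains the model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(len(X_train), len(X_test))       # 150 50
```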
Data mining
Data mining commonly involves four classes of task:
Classification - Arranges the data into predefined groups. For example, an email program might attempt to classify an email as legitimate or spam. Common algorithms include decision tree learning, nearest neighbor, naive Bayesian classification and neural networks.
Clustering - Like classification, but the groups are not predefined; the algorithm tries to group similar items together.
Regression - Attempts to find a function which models the data with the least error.
Association rule learning - Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
See also structured data analysis.
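As a small illustration, here is a sketch of two of these task classes using scikit-learn on synthetic data: k-means for clustering and least-squares fitting for regression (classification and association rule learning are sketched further below):
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Clustering: no predefined groups -- k-means finds them from the data alone.
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# Regression: fit a function that models the data with the least error.
x = rng.uniform(0, 10, (100, 1))
y = 3.0 * x[:, 0] + rng.normal(0, 0.5, 100)
model = LinearRegression().fit(x, y)

print(groups[:5], model.coef_[0])      # cluster ids, and a slope near 3.0
```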
Results validation:
The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the data mining algorithms are necessarily valid. It is common for the data mining algorithms to find patterns in the training set which are not present in the general data set; this is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learnt patterns are applied to this test set and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish spam from legitimate emails would be trained on a training set of sample emails. Once trained, the learnt patterns would be applied to the test set of emails on which it had not been trained, and the accuracy of these patterns can then be measured from how many emails they correctly classify. A number of statistical methods may be used to evaluate the algorithm, such as ROC curves.
If the learnt patterns do not meet the desired standards, then it is necessary to reevaluate and change the preprocessing and data mining. If the learnt patterns do meet the desired standards then the final step is to interpret the learnt patterns and turn them into knowledge.
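The spam example can be made concrete. Below is a minimal sketch assuming scikit-learn, with a synthetic bag-of-words dataset standing in for a real email corpus:
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic word-count features: spam emails (label 1) use the first
# five "words" more heavily than legitimate emails (label 0).
n = 400
y = rng.integers(0, 2, n)
X = rng.poisson(1, (n, 20)) + np.outer(y, np.r_[np.full(5, 3), np.zeros(15)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = MultinomialNB().fit(X_train, y_train)       # train on the training set
pred = clf.predict(X_test)                        # apply to the unseen test set
score = clf.predict_proba(X_test)[:, 1]

print("accuracy:", accuracy_score(y_test, pred))  # share correctly classified
print("ROC AUC :", roc_auc_score(y_test, score))  # threshold-free evaluation
```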
Notable uses:
Games:
Since the early 1960s, with the availability of oracles for certain combinatorial games, also called table bases (e.g. for 3x3 chess) with any beginning configuration, small-board dots-and-boxes, small-board hex, and certain endgames in chess, dots-and-boxes, and hex, a new area for data mining has been opened up. This is the extraction of human-usable strategies from these oracles. Current pattern recognition approaches do not seem to fully attain the required high level of abstraction to be applied successfully. Instead, extensive experimentation with the table bases, combined with an intensive study of table-base answers to well-designed problems and with knowledge of prior art, i.e. pre-table-base knowledge, is used to yield insightful patterns. Berlekamp in dots-and-boxes and John Nunn in chess endgames are notable examples of researchers doing this work, though they were not and are not involved in table base generation.
Business:
Data mining in customer relationship management applications can contribute significantly to the bottom line.[citation needed] Rather than randomly contacting a prospect or customer through a call center or sending mail, a company can concentrate its efforts on prospects that are predicted to have a high likelihood of responding to an offer. More sophisticated methods may be used to optimize resources across campaigns so that one may predict which channel and which offer an individual is most likely to respond to, across all potential offers. Additionally, sophisticated applications could be used to automate the mailing. Once the results from data mining (potential prospect/customer and channel/offer) are determined, this "sophisticated application" can either automatically send an e-mail or regular mail. Finally, in cases where many people will take an action without an offer, uplift modeling can be used to determine which people will have the greatest increase in responding if given an offer. Data clustering can also be used to automatically discover the segments or groups within a customer data set.
Businesses employing data mining may see a return on investment, but they also recognize that the number of predictive models can quickly become very large. Rather than one model to predict which customers will churn, a business could build a separate model for each region and customer type. Then, instead of sending an offer to all people that are likely to churn, it may only want to send offers to customers that are likely to take the offer. Finally, it may also want to determine which customers are going to be profitable over a window of time and only send the offers to those that are likely to be profitable. In order to maintain this quantity of models, businesses need to manage model versions and move to automated data mining.
Data mining can also be helpful to human-resources departments in identifying the characteristics of their most successful employees. Information obtained, such as universities attended by highly successful employees, can help HR focus recruiting efforts accordingly. Additionally, Strategic Enterprise Management applications help a company translate corporate-level goals, such as profit and margin share targets, into operational decisions, such as production plans and workforce levels.
Another example of data mining, often called market basket analysis, relates to its use in retail sales. If a clothing store records the purchases of customers, a data-mining system could identify those customers who favour silk shirts over cotton ones. Although some explanations of such relationships may be difficult to find, taking advantage of them is easier. The example deals with association rules within transaction-based data. Not all data are transaction-based, and logical or inexact rules may also be present within a database. In a manufacturing application, an inexact rule may state that 73% of products which have a specific defect or problem will develop a secondary problem within the next six months.
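A minimal sketch of the computation behind such association rules, in plain Python over a toy transaction list (the support and confidence thresholds here are illustrative choices, not standard values):
```python
from itertools import combinations
from collections import Counter

transactions = [                      # toy purchase records
    {"silk shirt", "tie"}, {"silk shirt", "tie"}, {"silk shirt", "belt"},
    {"cotton shirt", "belt"}, {"silk shirt", "tie", "belt"},
]
n = len(transactions)

pair_counts = Counter()
item_counts = Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

# Report rules A => B whose confidence, count(A and B) / count(A), is high
# and whose support, count(A and B) / n, is not negligible.
for (a, b), both in pair_counts.items():
    for lhs, rhs in ((a, b), (b, a)):
        conf = both / item_counts[lhs]
        if both / n >= 0.4 and conf >= 0.7:
            print(f"{lhs} => {rhs}  (confidence {conf:.0%})")
```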
Market basket analysis has also been used to identify the purchase patterns of the Alpha consumer. Alpha consumers are people who play a key role in connecting with the concept behind a product, then adopting that product, and finally validating it for the rest of society. Analyzing the data collected on these types of users has allowed companies to predict future buying trends and forecast supply demands.
Data mining is a highly effective tool in the catalog marketing industry. Catalogers have a rich history of customer transactions on millions of customers dating back several years. Data mining tools can identify patterns among customers and help identify the most likely customers to respond to upcoming mailing campaigns.
Related to an integrated-circuit production line, an example of data mining is described in the paper "Mining IC Test Data to Optimize VLSI Testing."[12] In this paper, the application of data mining and decision analysis to the problem of die-level functional test is described. Experiments mentioned in the paper demonstrate the ability of a system that mines historical die-test data to create a probabilistic model of patterns of die failure, which is then used to decide in real time which die to test next and when to stop testing. This system has been shown, based on experiments with historical test data, to have the potential to improve profits on mature IC products.
Science and engineering
In recent years, data mining has been widely used in areas of science and engineering, such as bioinformatics, genetics, medicine, education and electrical power engineering.
In the area of study on human genetics, an important goal is to understand the mapping relationship between the inter-individual variation in human DNA sequences and variability in disease susceptibility. In lay terms, it is to find out how changes in an individual's DNA sequence affect the risk of developing common diseases such as cancer. This is very important in helping to improve the diagnosis, prevention and treatment of these diseases. The data mining technique that is used to perform this task is known as multifactor dimensionality reduction.
In the area of electrical power engineering, data mining techniques have been widely used for condition monitoring of high voltage electrical equipment. The purpose of condition monitoring is to obtain valuable information on the health status of the equipment's insulation. Data clustering techniques such as the self-organizing map (SOM) have been applied to the vibration monitoring and analysis of transformer on-load tap changers (OLTCs). Using vibration monitoring, it can be observed that each tap change operation generates a signal that contains information about the condition of the tap changer contacts and the drive mechanisms. Obviously, different tap positions will generate different signals. However, there was considerable variability amongst normal condition signals for the exact same tap position. SOM has been applied to detect abnormal conditions and to estimate the nature of the abnormalities.
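The idea can be sketched roughly as follows. scikit-learn provides no SOM, so k-means stands in as the clustering step here, and a new signal is flagged as abnormal when it lies far from every learned "normal" prototype; the data and threshold are placeholders:
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Feature vectors extracted from vibration signals of normal tap changes
# (placeholder data; one row per recorded operation).
normal = rng.normal(0.0, 1.0, (300, 8))

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(normal)

def is_abnormal(signal, threshold=4.0):
    """Flag a signal whose distance to every 'normal' prototype is large."""
    return km.transform(signal.reshape(1, -1)).min() > threshold

print(is_abnormal(rng.normal(0.0, 1.0, 8)))   # typical signal -> False
print(is_abnormal(np.full(8, 10.0)))          # far-off signal -> True
```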
Data mining techniques have also been applied for dissolved gas analysis (DGA) on power transformers. DGA, as a diagnostic technique for power transformers, has been available for many years. Data mining techniques such as the SOM have been applied to analyze the data and to determine trends which are not obvious to standard DGA ratio techniques such as the Duval Triangle.
A fourth area of application for data mining in science/engineering is within educational research, where data mining has been used to study the factors leading students to choose to engage in behaviors which reduce their learning[15] and to understand the factors influencing university student retention.[16] A similar example of the social application of data mining is its use in expertise finding systems, whereby descriptors of human expertise are extracted, normalized and classified so as to facilitate the finding of experts, particularly in scientific and technical fields. In this way, data mining can facilitate institutional memory.
Other examples of data mining applications include the analysis of biomedical data facilitated by domain ontologies, mining clinical trial data, and traffic analysis using SOM.
In adverse drug reaction surveillance, the Uppsala Monitoring Centre has, since 1998, used data mining methods to routinely screen for reporting patterns indicative of emerging drug safety issues in the WHO global database of 4.6 million suspected adverse drug reaction incidents. Recently, similar methodology has been developed to mine large collections of electronic health records for temporal patterns associating drug prescriptions to medical diagnoses.
Spatial data mining:
Spatial data mining is the application of data mining techniques to spatial data. It follows the same general process as data mining, with the end objective of finding patterns in geography. So far, data mining and Geographic Information Systems (GIS) have existed as two separate technologies, each with its own methods, traditions and approaches to visualization and data analysis. In particular, most contemporary GIS have only very basic spatial analysis functionality. The immense explosion in geographically referenced data occasioned by developments in IT, digital mapping, remote sensing, and the global diffusion of GIS emphasizes the importance of developing data-driven inductive approaches to geographical analysis and modeling.
Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision-making. Recently, the task of integrating these two technologies has become critical, especially as various public and private sector organizations possessing huge databases with thematic and geographically referenced data begin to realise the huge potential of the information hidden there. Among those organizations are:
offices requiring analysis or dissemination of geo-referenced statistical data
public health services searching for explanations of disease clusters
environmental agencies assessing the impact of changing land-use patterns on climate change
geo-marketing companies doing customer segmentation based on spatial location
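For instance, the disease-cluster search above maps naturally onto density-based clustering. A minimal sketch using scikit-learn's DBSCAN on synthetic case coordinates (the eps radius, in degrees, is purely illustrative):
```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Synthetic geo-referenced disease cases: two dense outbreak areas
# plus scattered background cases (longitude, latitude pairs).
outbreak_a = rng.normal([10.0, 50.0], 0.01, (40, 2))
outbreak_b = rng.normal([10.5, 50.3], 0.01, (40, 2))
background = rng.uniform([9.5, 49.5], [11.0, 50.8], (60, 2))
cases = np.vstack([outbreak_a, outbreak_b, background])

# Points with >= 10 neighbours within ~0.05 degrees form a cluster;
# label -1 marks isolated background cases ("noise").
labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(cases)
print(sorted(set(labels)))            # e.g. [-1, 0, 1]: noise + two clusters
```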
Challenges
Geospatial data repositories tend to be very large. Moreover, existing GIS datasets are often splintered into feature and attribute components that are conventionally archived in hybrid data management systems. Algorithmic requirements differ substantially for relational (attribute) data management and for topological (feature) data management.[22] Related to this, the range and diversity of geographic data formats also presents unique challenges. The digital geographic data revolution is creating new types of data formats beyond the traditional "vector" and "raster" formats. Geographic data repositories increasingly include ill-structured data such as imagery and geo-referenced multi-media.
There are several critical research challenges in geographic knowledge discovery and data mining. Miller and Han offer the following list of emerging research topics in the field:
Developing and supporting geographic data warehouses - Spatial properties are often reduced to simple aspatial attributes in mainstream data warehouses. Creating an integrated geographic data warehouse (GDW) requires solving issues in spatial and temporal data interoperability, including differences in semantics, referencing systems, geometry, accuracy and position.
Better spatial-temporal representations in geographic knowledge discovery - Current geographic knowledge discovery (GKD) techniques generally use very simple representations of geographic objects and spatial relationships. Geographic data mining techniques should recognize more complex geographic objects (lines and polygons) and relationships (non-Euclidean distances, direction, connectivity and interaction through attributed geographic space such as terrain). Time needs to be more fully integrated into these geographic representations and relationships.
Geographic knowledge discovery using diverse data types - GKD techniques should be developed that can handle diverse data types beyond the traditional raster and vector models, including imagery and geo-referenced multimedia, as well as dynamic data types (video streams, animation).
Surveillance:
Previous data mining programs aimed at stopping terrorism under the U.S. government include the Total Information Awareness (TIA) program, Secure Flight (formerly known as Computer-Assisted Passenger Prescreening System (CAPPS II)), Analysis, Dissemination, Visualization, Insight, Semantic Enhancement (ADVISE[25]), and the Multistage Anti-Terrorism Information Exchange (MATRIX).[26] These programs have been discontinued due to controversy over whether they violate the US Constitution's 4th amendment, although many programs that were formed under them continue to be funded by different organizations or under different names.
Two plausible data mining techniques in the context of combating terrorism include "pattern mining" and "subject-based data mining".
Pattern mining
"Pattern mining" is a data mining technique that involves finding existing patterns in data. In this context, patterns often means association rules. The original motivation for searching for association rules came from the desire to analyze supermarket transaction data, that is, to examine customer behavior in terms of the purchased products. For example, an association rule "beer => crisps (80%)" states that four out of five customers that bought beer also bought crisps.
In the context of pattern mining as a tool to identify terrorist activity, the National Research Council provides the following definition: "Pattern-based data mining looks for patterns (including anomalous data patterns) that might be associated with terrorist activity — these patterns might be regarded as small signals in a large ocean of noise." Pattern mining includes new areas such as Music Information Retrieval (MIR), where patterns seen both in the temporal and non-temporal domains are imported to classical knowledge discovery search techniques.
Subject-based data mining
"Subject-based data mining" is a data mining technique involving the search for associations between individuals in data. In the context of combating terrorism, the National Research Council provides the following definition: "Subject-based data mining uses an initiating individual or other datum that is considered, based on other information, to be of high interest, and the goal is to determine what other persons or financial transactions or movements, etc., are related to that initiating datum."
Privacy concerns and ethics:
Some people believe that data mining itself is ethically neutral. However, the ways in which data mining can be used can raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns. Data mining requires data preparation which can uncover information or patterns which may compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation is when the data are accrued, possibly from various sources, and put together so that they can be analyzed. This is not data mining per se, but a result of the preparation of data before and for the purposes of the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when originally the data were anonymous.
It is recommended that an individual be made aware of the following before data are collected: the purpose of the data collection and any data mining projects, how the data will be used, who will be able to mine the data and use them, the security surrounding access to the data, and how collected data can be updated. Privacy concerns have also been somewhat addressed by Congress via the passage of regulatory controls such as HIPAA. The Health Insurance Portability and Accountability Act (HIPAA) requires individuals to be given "informed consent" regarding any information that they provide and its intended future uses by the facility receiving that information. According to an article in Biotech Business Week, "In practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena, says the AAHC. More importantly, the rule's goal of protection through informed consent is undermined by the complexity of consent forms that are required of patients and participants, which approach a level of incomprehensibility to average individuals." This underscores the necessity for data anonymity in data aggregation practices.
One may additionally modify the data so that they are anonymous, so that individuals may not be readily identified. However, even de-identified data sets can contain enough information to identify individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL.
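A minimal sketch of such de-identification by generalizing quasi-identifiers, using pandas on toy records; this illustrates generalization only and does not by itself guarantee anonymity:
```python
import pandas as pd

# Toy record set: the name is dropped outright; age and ZIP code are
# quasi-identifiers that, combined, could still single out an individual.
df = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee", "D. Kim"],
    "age":  [34, 36, 61, 63],
    "zip":  ["02139", "02141", "02139", "02142"],
    "diagnosis": ["flu", "flu", "asthma", "flu"],
})

anonymized = pd.DataFrame({
    # Generalize: exact age -> decade band, 5-digit ZIP -> 3-digit prefix.
    "age_band": pd.cut(df["age"], bins=range(0, 101, 10)),
    "zip_area": df["zip"].str[:3] + "**",
    "diagnosis": df["diagnosis"],
})
print(anonymized)
```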
Information technology management:
The definition of Information Technology Management, derived from the definition of Technology Management is as follows:
Information Technology Management is concerned with exploring and understanding Information Technology as a corporate resource that determines both the strategic and operational capabilities of the firm in designing and developing products and services for maximum customer satisfaction, corporate productivity, profitability and competitiveness.
IT Management is a different subject from Management Information Systems. Management Information Systems refer to information management methods tied to the automation or support of human decision making. IT Management, as stated in the above definition, refers to the IT-related management activities in organizations. MIS, as it is referred to, focuses mainly on the business aspect, with a strong input into the technology phase of the business/organization.
Those practicing Information Technology Management are commonly referred to as IT Managers. IT Managers have a lot in common with Project Managers but their main difference is one of focus: IT Managers are responsible and accountable for an ongoing program of IT services while the Project Managers' responsibility and accountability are both limited to a project with a clear start and end date.
List of IT Management disciplines
The concepts below are commonly listed or investigated under the broad term IT Management:
Business/IT alignment
IT Governance
IT Financial Management
IT Service Management
Sourcing
IT configuration management