Thursday, September 23, 2010

The World of Cracks


Our pavements have never been in a worse state of disrepair than they are today, and the reasons for this are many. It is estimated that 650 potholes open up every minute in North American streets and highways, or 341,640,000 per year. Repairing these would require $51,246,000,000 per year. It is also estimated that there are 88,070,400,000 feet of cracks in North American streets and highways that need to be repaired annually, at a cost of $28,182,528,000. Repairing the potholes and cracks together requires $79,428,528,000. This is an enormous sum which towns, cities, counties, and state and provincial governments have to come up with annually. No matter what, these potholes and cracks will be there every year. In addition, the North American motorist incurs substantial annual costs associated with poor roads, above the normal routine costs of operating a motor vehicle. A fairly recent report stated that in Los Angeles, the additional cost of operating a motor vehicle is $778. The additional annual cost of operating a motor vehicle in other North American cities is somewhat less, but still significant. Poor road conditions also contribute substantially to the number of fatalities annually.
Pavements showing various distresses such as cracks, potholes, and ruts also contribute significantly to increased fuel consumption. A 1985 road information survey concluded that the additional fuel consumed due to failed pavements cost $21.3 billion annually. The average price of gasoline across the United States at the time was $1.15 per gallon, which means about 18.52 billion gallons of fuel were wasted. We can safely say that the amount of gasoline wasted today due to failed pavements is significantly higher: we not only have more failed pavements than ever before, we also have many more cars on the road.
Towns, cities, counties, and state and provincial governments have a number of programs in place in the hope of keeping their street and highway pavements in a state of good repair. These programs consist of preventive maintenance, which means filling cracks and potholes, overlaying failed pavements with a lift or two of conventional asphalt concrete, or reconstructing a failed pavement. Following these programs requires an enormous amount of money, which is difficult to come by, especially during this financial crisis. Cities, towns, and counties are near bankruptcy. Even formerly wealthy states like California were forced to eliminate 20,000 government jobs and slash the pay of 200,000 other employees during fiscal 2009. The credit crunch of 2008 may last for many more years before enough private capital is available to fuel the economy. Most of the capital available today has come from the stimulus packages passed by Congress. What does all this mean? It means most of our pavements will remain in disrepair for the foreseeable future. Even if funding is restored to pre-2008 levels, it will still be insufficient to cope with all our failed pavements.
Our present approach to dealing with all the failed pavements requires new thinking by all of us. The first thing we must do is think outside the box. So let us look at how we are trying to maintain our extensive network of streets and highways. One term that we constantly bandy about is "preventive maintenance." What exactly are we trying to prevent? When we fill potholes or cracks, do we really believe that they are repaired permanently? When we overlay an old failed pavement with one or two lifts of new asphalt concrete, do we really believe that cracks and potholes will not reflect through the new overlay? If we do, then we need a drastic re-evaluation of our experience, training, and education. Our so-called preventive maintenance involves conventional asphalt concrete, either dense-graded or gap-graded. Use of conventional asphalt concrete mixes will do nothing to prevent the old failures from recurring in a very short time; sometimes it may take only a few months or a few years, but they will recur. Paving streets, roads, and highways requires huge quantities of crushed aggregate, which is readily available. When this aggregate is mixed with the proper amount of asphalt cement, as determined in the laboratory, the resultant mix is classified as conventional asphalt concrete. Conventional asphalt concrete is relatively economical compared to other construction materials, which is one reason it is used so extensively in road construction. The pavement failures we observe are the direct result of using conventional asphalt concrete mixes. We can change the gradation of the aggregate all we want without improving the performance of the pavement significantly. Thus, we can have dense-graded mixes or gap-graded ones and in the long run find very little improvement. It may take a little longer for a gap-graded pavement to crack compared to a dense-graded asphalt concrete mix, but the resultant cracks will be wider and grow faster because of the predominantly larger particle sizes. There are many products on the market that claim to mitigate reflective cracking. Some work better than others, while some do not work at all.
When we mitigate or even eliminate cracking in a new pavement, or reflective cracking in overlays, we will at the same time eliminate the formation of most potholes. As a result, we would drastically reduce the $79,428,528,000 annual cost of filling all the cracks and potholes that appear every year in North American streets, roads, and highways. Until then, we can conclude that it will continue to require enormous funds just to maintain our highway infrastructure.
The various state and provincial governments own, on average, less than 15% of the roads and highways; the towns, cities, and counties own the rest. It is these jurisdictions that carry most of the burden of pavement maintenance, and as a result, it is they that have to come up with the money. Clearly, this is just too much of a financial burden to cope with. The first casualty of any budget is the funding allocated for street and road maintenance, which is drastically reduced. Is it any wonder that our streets and roads are in such disrepair?

What is a Denial-Of-Service Attack?

A denial-of-service (DoS) attack attempts to prevent legitimate users from accessing information or services. By targeting your computer and its network connection, or the computers and network of the sites you are trying to use, an attacker may be able to prevent you from accessing email, websites, online accounts, banking, root name servers, or other services that rely on the affected computer.
One common method of attack involves saturating the target machine with communications requests, so that it cannot respond to legitimate traffic, or responds so slowly that it is effectively unavailable.
During normal network communications using TCP/IP, a user contacts a server with a request to display a web page, download a file, or run an application. The user's request begins with a greeting message called a SYN. The server responds with its own SYN along with an acknowledgment (ACK) of the user's initial request, a combined message called a SYN+ACK. The server then waits for a reply, or ACK, from the user acknowledging that it received the server's SYN. Once the user replies, the communication connection is established and data transfer can begin.
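In code, this handshake is carried out by the operating system when a client connects. The following is a minimal C sketch, not from any particular application; the IP address and port are placeholders, and the kernel performs the SYN, SYN+ACK, ACK exchange inside connect():

```c
/* Minimal TCP client sketch. The three-way handshake (SYN, SYN+ACK, ACK)
 * is performed by the kernel inside connect(). IP and port are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr); /* placeholder address */

    /* connect() sends the SYN, waits for the SYN+ACK, and replies with
     * the final ACK; only then does it return success. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }
    puts("handshake complete; connection established");
    close(fd);
    return 0;
}
```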
In a DoS attack against a server, the attacker sends a SYN request to the server. The server then responds with a SYN+ACK and waits for a reply. However, the attacker never sends the final ACK needed to complete the connection.
The server continues to "hold the line open" and wait for a response (which is not coming) while at the same time receiving more false requests and keeping more lines open for responses. After a short period, the server runs out of resources and can no longer accept legitimate requests.
A variation of the DoS attack is the distributed denial of service (DDoS) attack. Instead of using one computer, a DDoS may use thousands of remote-controlled zombie computers in a botnet to flood the victim with requests. The large number of attackers makes it almost impossible to locate and block the source of the attack. Most DoS attacks are of the distributed type.
An older type of DoS attack is the smurf attack. During a smurf attack, the attacker sends a request to a large number of computers and makes it appear as if the request came from the target server. Each computer responds to the target server, overwhelming it and causing it to crash or become unavailable. Smurf attacks can be prevented with a properly configured operating system or router, so such attacks are no longer common.
DoS attacks are not limited to wired networks but can also be used against wireless networks. An attacker can flood the radio frequency (RF) spectrum with enough electromagnetic interference to prevent a device from communicating effectively with other wireless devices. This attack is rarely seen due to the cost and complexity of the equipment required to flood the RF spectrum.
Some symptoms of a DoS attack include:
  • Unusually slow performance when opening files or accessing web sites
  • Unavailability of a particular web site
  • Inability to access any web site
  • Dramatic increase in the number of spam emails received
To prevent DoS attacks, administrators can utilize firewalls to deny protocols, ports, or IP addresses. Some switches and routers can be configured to detect and respond to DoS attacks using automatic data-traffic rate filtering and balancing. Additionally, application front-end hardware and intrusion prevention systems can analyze data packets as they enter the system and identify whether they are regular or dangerous.
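One common form of the automatic traffic-rate filtering mentioned above is a token bucket. The following is a minimal C sketch under assumed, arbitrary rate and burst values; it illustrates the idea, not any particular vendor's implementation:

```c
/* Token-bucket rate limiter sketch: one way "automatic data traffic rate
 * filtering" can be implemented. Rates and capacities here are arbitrary. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

typedef struct {
    double tokens;        /* tokens currently available */
    double capacity;      /* maximum burst size */
    double rate;          /* tokens added per second */
    struct timespec last; /* time of the last refill */
} bucket_t;

static bool allow_packet(bucket_t *b) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double elapsed = (now.tv_sec - b->last.tv_sec)
                   + (now.tv_nsec - b->last.tv_nsec) / 1e9;
    b->last = now;

    b->tokens += elapsed * b->rate;            /* refill over elapsed time */
    if (b->tokens > b->capacity) b->tokens = b->capacity;

    if (b->tokens >= 1.0) {                    /* spend one token per packet */
        b->tokens -= 1.0;
        return true;                           /* forward the packet */
    }
    return false;                              /* drop: rate exceeded */
}

int main(void) {
    bucket_t b = { .tokens = 10, .capacity = 10, .rate = 100 };
    clock_gettime(CLOCK_MONOTONIC, &b.last);
    int dropped = 0;
    for (int i = 0; i < 1000; i++)             /* simulated packet burst */
        if (!allow_packet(&b)) dropped++;
    printf("dropped %d of 1000 packets in burst\n", dropped);
    return 0;
}
```

A sudden flood exhausts the bucket almost immediately, so most of the burst is dropped while legitimate, lower-rate traffic continues to pass.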

Software Security Development - A White Hat's Perspective

"If you know the enemy and know yourself you need not fear the results of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle." - Sun Tzu[1]
Introduction
How to know your enemy
Knowing your enemy is vital to fighting him effectively. Security should be learned not just through network defense, but also by studying software vulnerabilities and the techniques used to exploit them with malicious intent. As computer attack tools and techniques continue to advance, we will likely see major, life-impacting events in the near future. However, we can create a much more secure world, with risk managed down to an acceptable level. To get there, we have to integrate security into our systems from the start and conduct thorough security testing throughout the software life cycle of the system. One of the most interesting ways of learning computer security is studying and analyzing from the perspective of the attacker. A hacker or a programming cracker uses various available software applications and tools to analyze and investigate weaknesses in network and software security and exploit them. Exploiting software is exactly what it sounds like: taking advantage of some bug or flaw and repurposing it to work to the attacker's advantage.
Similarly, your personal sensitive information could be very useful to criminals. These attackers might be looking for sensitive data to use in identity theft or other fraud, a convenient way to launder money, information useful in their criminal business endeavors, or system access for other nefarious purposes. One of the most important stories of the past couple of years has been the rush of organized crime into the computer attacking business; they apply business processes to making money from computer attacks. This type of crime can be highly lucrative for those who steal and sell credit card numbers, commit identity theft, or even extort money from a target under threat of a DoS flood. Further, if the attackers cover their tracks carefully, the chances of going to jail are far lower for computer crimes than for many types of physical crimes. Finally, by operating from an overseas base, in a country with little or no legal framework for prosecuting computer crime, attackers can operate with virtual impunity [1].
Current Security
Assessing the vulnerabilities of software is the key to improving the current security within a system or application. Developing such a vulnerability analysis should take into consideration any holes in the software that could be used to carry out a threat. This process should highlight points of weakness and assist in the construction of a framework for subsequent analysis and countermeasures. The security we have in place today includes firewalls, counterattack software, IP blockers, network analyzers, virus protection and scanning, encryption, user profiles, and password keys. Understanding how attacks target these basic functionalities of the software, and of the computer system that hosts it, is important to making software and systems stronger.
You may have a task which requires a client-host module, which in many instances is the starting point from which a system is compromised. Understanding the framework you're utilizing, including the kernel, is also imperative for preventing an attack. A stack overflow occurs when a function called in a program writes past the bounds of a buffer on the stack, corrupting data such as local variables, arguments to the function, and the return address; the order of operations within a structure and the compiler being used determine exactly what can be reached. An attacker who obtains this layout information can exploit it to overwrite the input parameters on the stack and produce a result the program never intended. This may be useful to a hacker who wants to obtain information that grants access to a person's account, or for something like an SQL injection into your company's database. Another way to get the same effect without knowing the size of the buffer is a heap overflow, which targets the dynamically allocated buffers that are used when the size of the data is not known ahead of time and memory is reserved at allocation.
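To make the stack overflow concrete, here is a minimal C sketch; the function names and input are illustrative, not from any particular program:

```c
/* Classic stack-based buffer overflow sketch: strcpy() performs no bounds
 * check, so input longer than 16 bytes overwrites adjacent stack memory,
 * including the saved return address. Names here are illustrative. */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);                    /* BUG: no length check */
    printf("copied: %s\n", buf);
}

void safe(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof(buf) - 1);  /* bounded copy */
    buf[sizeof(buf) - 1] = '\0';
    printf("copied: %s\n", buf);
}

int main(void) {
    /* 40 'A's: far more than buf can hold in vulnerable() */
    const char *attack = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
    safe(attack);                          /* truncates harmlessly */
    /* vulnerable(attack);   would corrupt the stack; left commented out */
    return 0;
}
```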
We already know a little bit about integer overflows (or at least should). Integer overflows occur when arithmetic on a variable wraps around, flipping the sign bit so the value is interpreted as a negative number. Although this sounds harmless, the integers themselves are dramatically changed, which could serve the attacker's needs, such as causing a denial-of-service attack. I'm concerned that if engineers and developers do not check for overflows such as these, it could mean errors resulting in overwriting some part of the memory. This would imply that if anything in memory is accessible, it could shut down the entire system and leave it vulnerable later down the road.
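A minimal C sketch of the kind of length check that integer overflow defeats; the buffer size and header constant are assumptions for illustration:

```c
/* Integer overflow sketch: a signed length check that wraps around.
 * The buffer size and header constant are illustrative. */
#include <limits.h>
#include <stdio.h>

#define BUF_SIZE 64

int length_check(int user_len) {
    /* BUG: if user_len is near INT_MAX, user_len + 8 wraps negative
     * (formally undefined behavior in C, but it wraps on typical
     * two's-complement hardware) and the check wrongly passes. */
    if (user_len + 8 > BUF_SIZE)
        return 0;                          /* rejected */
    return 1;                              /* accepted */
}

int length_check_fixed(int user_len) {
    /* Safe: rearranged so no arithmetic can overflow. */
    if (user_len < 0 || user_len > BUF_SIZE - 8)
        return 0;
    return 1;
}

int main(void) {
    printf("naive check with INT_MAX: %s\n",
           length_check(INT_MAX) ? "ACCEPTED (bug!)" : "rejected");
    printf("fixed check with INT_MAX: %s\n",
           length_check_fixed(INT_MAX) ? "accepted" : "rejected (correct)");
    return 0;
}
```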
Format string vulnerabilities are the result of poor attention to code by the programmers who write it. If the programmer leaves the call as "printf(string);" or something similar, input containing a format parameter such as "%x" returns the hexadecimal contents of the stack. There are many other testing tools and techniques used in testing the design of frameworks and applications, such as "fuzzing," which can prevent these kinds of exploits by revealing where the holes lie.
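A minimal C sketch of the vulnerable and corrected calls described above; the function names are illustrative:

```c
/* Format string vulnerability sketch: passing user input directly as the
 * format argument lets "%x" specifiers dump stack memory. */
#include <stdio.h>

void log_message_vulnerable(const char *user_input) {
    printf(user_input);            /* BUG: user controls the format string */
    putchar('\n');
}

void log_message_safe(const char *user_input) {
    printf("%s\n", user_input);    /* fixed format; input is plain data */
}

int main(void) {
    const char *attack = "%x %x %x %x";  /* would print stack words */
    log_message_safe(attack);            /* prints the literal string */
    /* log_message_vulnerable(attack);   would leak stack contents */
    return 0;
}
```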
Exploiting these software flaws almost always means supplying bad input to the software so that it acts in a way it was not intended or predicted to. Bad input can produce many types of returned data and effects in the software logic, which can be reproduced by learning the input flaws. In most cases this involves overwriting original values in memory, whether through data handling or code injection. TCP/IP (Transmission Control Protocol/Internet Protocol) and related protocols are incredibly flexible and can be used for all kinds of applications. However, the inherent design of TCP/IP offers many opportunities for attackers to undermine the protocol, causing all sorts of problems with our computer systems. By undermining TCP/IP and other ports, attackers can violate the confidentiality of our sensitive data, alter the data to undermine its integrity, pretend to be other users and systems, and even crash our machines with DoS attacks. Many attackers routinely exploit the vulnerabilities of traditional TCP/IP to gain access to sensitive systems around the globe with malicious intent.
Hackers today have come to understand operating frameworks and the security vulnerabilities within the operating structure itself. Windows, Linux, and UNIX have all been openly exploited for their flaws by means of viruses, worms, or Trojan attacks. After gaining access to a target machine, attackers want to maintain that access. They use Trojan horses, backdoors, and root-kits to achieve this goal. Just because operating environments may be vulnerable to attacks doesn't mean your system has to be as well. With the integrated security now built into operating systems like Windows Vista, or with the open-source model of Linux, you will have no trouble maintaining effective security profiles.
Finally, I want to discuss the technology we're seeing that is used to hack the hacker, so to speak. Recently, a security researcher named Joel Eriksson showcased an application that infiltrates the hacker's own attack tools and uses them against the attacker.
From a Wired article on Joel Eriksson's presentation at the RSA conference:
"Eriksson, a researcher at the Swedish security firm Bitsec, uses reverse-engineering tools to find remotely exploitable security holes in hacking software. In particular, he targets the client-side applications intruders use to control Trojan horses from afar, finding vulnerabilities that would let him upload his own rogue software to intruders' machines." [7]
Hackers, particularly in China, use a program called PCShare to hack their victims' machines and upload or download files. Eriksson's technique exploits a bug in this remote administration tool (RAT) that its writers most likely overlooked or didn't think to encrypt. The bug is in a module that allows the program to display the download and upload times for files. The hole was enough for Eriksson to write files onto the user's system and even control the server's autostart directory. Not only can this technique be used on PCShare, but also on any number of botnets. New software like this is coming out every day, and it will benefit your company to know what kinds will help fight the interceptor.
Mitigation Process and Review
Software engineering practices for quality and integrity include the software security framework patterns that will be used. "Confidentiality, integrity, and availability have overlapping concerns, so when you partition security patterns using these concepts as classification parameters, many patterns fall into the overlapping regions" [3]. Among these security domains there are other areas of high pattern density, including distributed computing, fault tolerance and management, and process and organizational structuring. These subject areas are enough to make a complete course on patterns in software design [3].
We must also focus on the context of the application, which is where the pattern is applied, and on the stakeholders' views and the protocols they want to serve. Threat models such as the CIA model (confidentiality, integrity, and availability) will define the problem domain for the threats and the classifications behind the patterns used under the CIA model. Such classifications are defined under the Defense in Depth, Minefield, and Grey Hats techniques.
The tabular classification scheme for security patterns defines the classification based on domain concepts, but fails to account for more general patterns that span multiple categories. What its authors tried to do in classifying patterns was to base the classification on the problems that need to be solved. They partitioned the security pattern problem space using the threat model in particular to distinguish the scope. A classification process based on threat models is more perceptive because it uses the security problems that patterns solve. An example of these threat models is STRIDE, an acronym for the following concepts:
Spoofing: An attempt to gain access to a system using a forged identity. A compromised system would give an unauthorized user access to sensitive data.
Tampering: Data corruption during network communication, where the data's integrity is threatened.
Repudiation: A user's refusal to acknowledge participation in a transaction.
Information Disclosure: The unwanted exposure and loss of private data's confidentiality.
Denial of service: An attack on system availability.
Elevation of Privilege: An attempt to raise the privilege level by exploiting some vulnerability, where a resource's confidentiality, integrity, and availability are threatened. [3]
What this threat model covers can be discussed using the following four patterns: Defense in Depth, Minefield, Policy Enforcement Point, and Grey Hats. Despite this, all patterns belong to multiple groups one way or another, because classifying abstract threats would prove difficult. The IEEE classification hierarchy is a tree that represents nodes on the basis of domain-specific terms. Pattern navigation is easier and more meaningful in this format. A classification scheme based on the STRIDE model alone is limited, because patterns that address multiple concepts can't be classified using a two-dimensional schema. The hierarchical scheme shows not only the leaf nodes, which display the patterns, but also the multiple threats that affect them. The internal nodes at the higher base levels collect the multiple threats that all the dependent levels are affected by. Threat patterns at the tree's root apply to multiple contexts, which consist of the core, the perimeter, and the exterior. Patterns that are more basic, such as Defense in Depth, reside at the classification hierarchy's highest level because they apply to all contexts. Using network tools to find threat concepts such as spoofing, intrusion tampering, repudiation, DoS, and secure pre-forking will allow the development team to pinpoint areas of security weakness in core, perimeter, and exterior security.
Defense against kernel-mode root-kits should keep attackers from gaining administrative access in the first place, by applying system patches. Tools for Linux, UNIX, and Windows look for anomalies introduced on a system by various user- and kernel-mode root-kits. Although a perfectly implemented and perfectly installed kernel root-kit can dodge a file integrity checker, reliable scanning tools remain useful because they can find very subtle mistakes made by an attacker that a human might miss. Linux software also provides useful tools for incident response and forensics; for example, some tools return output that can be trusted more than that of a system subverted by user- or kernel-mode root-kits.
Logs that have been tampered with are less than useless for investigative purposes, and conducting a forensic investigation without logging checks is like cake without the frosting. Hardening any system requires careful attention to defending its logs, and how much attention depends on the sensitivity of the server. Computers on the net that contain sensitive data require great care to protect. For some systems on an intranet, logging might be less imperative. However, for vitally important systems containing sensitive information about human resources, legal issues, or mergers and acquisitions, the logs can make or break the protection of your company's confidentiality. Detecting an attack and finding the evidence that digital forensics needs is vital for building a case against the intruder. So encrypt those logs: the better the encryption, the less likely they will ever be tampered with.
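The passage recommends encrypting logs; a closely related safeguard, shown here instead, is appending a keyed hash (HMAC) to each entry so tampering becomes detectable. This is a minimal sketch using OpenSSL's HMAC(); the key and log line are placeholders:

```c
/* Log-integrity sketch: an HMAC over each log line makes tampering
 * detectable. Uses OpenSSL's HMAC(); key and message are placeholders.
 * Build with: cc hmac_log.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(void) {
    const unsigned char key[] = "replace-with-a-real-secret-key";
    const char *log_line = "2010-09-23 12:00:01 login user=alice ok";

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;

    /* Compute HMAC-SHA256 over the log entry */
    HMAC(EVP_sha256(), key, (int)(sizeof(key) - 1),
         (const unsigned char *)log_line, strlen(log_line),
         mac, &mac_len);

    /* Store the tag alongside the entry; recompute and compare it
     * before trusting the log during an investigation. */
    printf("%s | hmac=", log_line);
    for (unsigned int i = 0; i < mac_len; i++)
        printf("%02x", mac[i]);
    putchar('\n');
    return 0;
}
```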
Protocol Fuzzing
Protocol fuzzing is a software testing technique that automatically generates, then submits, random or sequential data to various areas of an application in an attempt to uncover security vulnerabilities. It is most commonly used to discover security weaknesses in applications and protocols that handle data transport to and from the client and host. The basic idea is to attach the inputs of a program to a source of random or unexpected data. If the program fails (for example, by crashing, or by failing built-in code assertions), then there are defects to correct. These kinds of fuzzing techniques were first developed by Professor Barton Miller and his associates [5]. The intent was to change the mentality from being overconfident in one's technical knowledge to actually questioning the conventional wisdom behind security.
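A minimal C sketch of this idea: feed random bytes to a target routine in a forked child process and flag crashes. The parse_record() function is a stand-in for whatever code is actually under test:

```c
/* Minimal fuzzing harness sketch: feed random bytes to a target function
 * in a forked child and flag crashes. parse_record() is a stand-in for
 * the real code under test. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Toy target: crashes on one particular class of malformed input. */
static void parse_record(const unsigned char *data, size_t len) {
    if (len > 2 && data[0] == 0xFF && data[1] == 0xFF) {
        char *p = NULL;
        *p = 'x';                      /* deliberate bug: segfault */
    }
}

int main(void) {
    srand(1234);                       /* fixed seed: reproducible runs */
    for (int i = 0; i < 10000; i++) {
        unsigned char buf[32];
        size_t len = (size_t)(rand() % (int)sizeof(buf)) + 1;
        for (size_t j = 0; j < len; j++)
            buf[j] = (unsigned char)(rand() % 256);

        pid_t pid = fork();
        if (pid == 0) {                /* child: run the target */
            parse_record(buf, len);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {     /* child crashed: log the input */
            printf("crash on iteration %d, input:", i);
            for (size_t j = 0; j < len; j++) printf(" %02x", buf[j]);
            putchar('\n');
        }
    }
    return 0;
}
```

Running the target in a child process is the key design choice: a crash kills only the child, so the harness survives and can log the offending input for fault isolation.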
Luiz Edwardo on protocol fuzzing:
"Most of the time, when the perception of security doesn't match the reality of security, it's because the perception of the risk does not match the reality of the risk. We worry about the wrong things: paying too much attention to minor risks and not enough attention to major ones. We don't correctly assess the magnitude of different risks. A lot of this can be chalked up to bad information or bad mathematics, but there are some general pathology that come up over and over again" [6].
With fuzzing going mainstream, we have seen numerous bugs in systems make national or even international news. Attackers have a list of contacts, a handful of IP addresses for your network, and a list of domain names. Using a variety of scanning techniques, the attackers have gained valuable information about the target network, including a list of phone numbers with modems (obsolescent but still viable), a group of wireless access points, addresses of live hosts, the network topology, open ports, and firewall rule sets. The attacker has even gathered a list of vulnerabilities found on your network, all the while trying to evade detection. At this point, the attackers are poised for the kill, ready to take over systems on your network. This growth in fuzzing has shown that delivering product or service software using only basic testing practices is no longer acceptable. Because the internet provides so many protocol-breaking tools, it is very likely that an intruder will break your company's protocol at every level of its structure, semantics, and protocol states. So in the end, if you do not fuzz it, someone else will. Session-based, and even state-based, fuzzing practices have been used to establish connections using the state level of a session to achieve better fault isolation. But the real challenge in fuzzing is applying these techniques and then isolating the fault environment, the bugs, the protocol implementation, and the monitoring of the environment.
Systems Integration
There are three levels of systems integration the developer must consider for security. The software developer must consider the entire mitigation review of the software flaw and base it on the design implementation. This includes access control, intrusion detection and the trade-offs for the implementation. Integrating these controls into the system is important in the implementation stage of development. Attacks on these systems may even lead to severe safety and financial effects. Securing computer systems has become a very important part of system development and deployment.
Since we cannot completely take away the threats, we must minimize their impact instead. This can be made possible by creating an understanding of the human and technical issues involved in such attacks. This knowledge can allow an engineer or developer to make the intruder's life as hard as possible, which makes understanding the attacker's motivations and skill level an even greater challenge. Think of it as infiltrating the hacker's head by thinking like them psychologically.
Access Control
Even if you have implemented all of the controls you can think of, there are a variety of other security lockdowns that must continually be applied against constant attacks on a system. You might apply security patches, use a file integrity checking tool, and have adequate logging, but have you recently looked for unsecured modems? How about activating security on the ports or switches in your critical network segments to prevent the latest sniffing attack? Have you considered implementing non-executable stacks to prevent one of the most common types of attacks today, the stack-based buffer overflow? You should always be ready for kernel-level root-kits alongside any of these other attacks, since they imply the attacker is capable of taking command of your system away from you.
Password attacks are very common in exploiting software authorization protocols. Attackers often try to guess passwords for systems to gain access, either by hand or by using generated scripts. Password cracking involves taking the encrypted or hashed passwords from a system cache or registry and using an automated tool to determine the original passwords. Password cracking tools create password guesses, encrypt or hash the guesses, and compare the result with the encrypted or hashed password, provided you have the password file to compare the results against. The password guesses can come from a dictionary, brute-force routines, or hybrid techniques. This is why access controls must protect human, physical, and intellectual assets against loss, damage, or compromise by permitting or denying entrance into, within, and from the protected area. The controls will also deny or grant access rights, and the times thereof, for the protected area. The access controls are operated by human resources using physical and/or electronic hardware in accordance with the policies. To defend against password attacks, you must have a strong password policy that requires users to have nontrivial passwords. You must make users aware of the policy, employ password-filtering software, and periodically crack your own users' passwords (with appropriate permission from management) to enforce the policy. You might also want to consider authentication tools stronger than passwords, such as PKI authentication, hardware tokens, or auditing software [1].
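A minimal C sketch of the dictionary approach described above, intended for auditing your own systems with permission. It uses the POSIX crypt() function; the stored hash and word list are placeholders, and crypt() scheme support varies by platform:

```c
/* Dictionary password-audit sketch: hash each guess the same way the
 * stored password was hashed and compare, as described above. Uses the
 * POSIX crypt() function; build with: cc crack.c -lcrypt
 * The "stored" hash and word list are placeholders. */
#define _XOPEN_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Simulate a stored hash; in practice this comes from a password
     * file you are authorized to audit. crypt() may return NULL if the
     * hashing scheme is unsupported on your platform. */
    char stored[64];
    char *h = crypt("secret", "ab");       /* "ab" is the two-char salt */
    if (!h) { puts("crypt scheme unavailable"); return 1; }
    strncpy(stored, h, sizeof(stored) - 1);
    stored[sizeof(stored) - 1] = '\0';

    const char *wordlist[] = { "password", "123456", "letmein", "secret" };
    for (size_t i = 0; i < sizeof(wordlist) / sizeof(wordlist[0]); i++) {
        /* crypt() reuses the salt embedded in the stored hash */
        char *guess = crypt(wordlist[i], stored);
        if (guess && strcmp(guess, stored) == 0) {
            printf("match found: %s\n", wordlist[i]);
            return 0;
        }
    }
    puts("no match in word list");
    return 1;
}
```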
Another developer, though, might be interested in authentication only. This user would first create minimal access points where the authenticator pattern enforces authentication policies. The subject descriptor defines the data used to grant or deny the authentication decision. A password synchronizer pattern performs distributed password management. Authenticator and password synchronizer are not directly related; users will need to apply other patterns after authenticator before they can use a password synchronizer.
Intrusion Detection
Intrusion detection is used for monitoring and logging the activity of security risks. A functioning network intrusion detection system should indicate that someone has found the doors, but nobody has actually tried to open them yet. It inspects inbound and outbound network activity and identifies patterns that may indicate a network or system attack from someone attempting to compromise the system. In detecting misuse of the system, the tools used, such as scanners, analyze the information they gather and compare it to large databases of attack signatures. In essence, the security detection looks for a specific attack that has already been documented. Like a virus detection system, the detection system is only as good as the index of attack signatures that it uses to compare packets against. In anomaly detection, the system administrator defines the normal state of the network's traffic breakdown, load, protocols, and typical packet size; anomaly detection then compares the current state of network segments to that normal state and looks for anomalies. The design of the intrusion detection system must also take into account, and detect, malicious packets that are meant to slip past a generic firewall's basic filtering rules. In a host-based system, the detection system examines the activity on each individual computer or host. As long as you are securing the environment and authorizing transactions, intrusion detection should pick up no activity from a flaw in the system's data flow.
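A toy C sketch of the signature-matching idea at the heart of misuse detection: compare each packet payload against a list of known attack strings. The signatures and payloads are illustrative only, not drawn from any real signature database:

```c
/* Signature-matching sketch: the core of a misuse-detection IDS is
 * comparing traffic against a database of known attack patterns.
 * The signatures and payloads below are illustrative only. */
#include <stdio.h>
#include <string.h>

static const char *signatures[] = {
    "/etc/passwd",            /* path traversal attempt */
    "' OR '1'='1",            /* SQL injection probe */
    "%x%x%x%x",               /* format string probe */
};
#define NSIG (sizeof(signatures) / sizeof(signatures[0]))

/* Returns the index of the matching signature, or -1 if clean. */
static int inspect_payload(const char *payload) {
    for (size_t i = 0; i < NSIG; i++)
        if (strstr(payload, signatures[i]))
            return (int)i;
    return -1;
}

int main(void) {
    const char *packets[] = {
        "GET /index.html HTTP/1.0",
        "GET /../../etc/passwd HTTP/1.0",
    };
    for (size_t i = 0; i < 2; i++) {
        int hit = inspect_payload(packets[i]);
        if (hit >= 0)
            printf("ALERT: packet %zu matched signature \"%s\"\n",
                   i, signatures[hit]);
        else
            printf("packet %zu: clean\n", i);
    }
    return 0;
}
```

As the passage notes, a matcher like this is only as good as its signature list: anything not yet documented slips through, which is exactly the gap anomaly detection tries to cover.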
Trade-Offs
Trade-offs of the implementation must also be taken into consideration when developing these controls and detection software. The developer must weigh the severity of the risk, the probability of the risk, the magnitude of the costs, and how effective the countermeasure is at mitigating the risk, and must consider how well disparate risks and costs can be analyzed at this level; even after the risk analysis is complete, actual changes must be considered and the security assessment reassessed through this process. The one area that can cause the feeling of security to diverge from the reality of security is the idea of risk itself. If we get the severity of the risk wrong, we're going to get the trade-off wrong, which cannot be allowed at a critical level. We can see the implications in two ways. First, we can underestimate risks, like the risk of an automobile accident on the way to work. Second, we can overestimate some risks, such as the risk of someone you know stalking you or your family. When we overestimate and when we underestimate is governed by a few specific heuristics. One heuristic area is the idea that "bad security trade-offs is probability. If we get the probability wrong, we get the trade-off wrong" [6]. These heuristics are not specific to risk, but they contribute to bad evaluations of risk. As humans, our ability to quickly assess and spit out some probability in our brains runs into all sorts of problems. When we organize ourselves to correctly analyze a security issue, it becomes mere statistics. But when it comes down to it, we still need to figure out the threat of the risk, which can be found by "listing five areas where perception can diverge from reality:"
-The severity of the risk.
-The probability of the risk.
-The magnitude of the costs.
-How effective the countermeasure is at mitigating the risk.
-The trade-off itself [6].
To think a system is completely secure is absurd and illogical at best, unless hardware security were more widespread. The feeling and the reality of security are different, but they're closely related. We make our best security trade-offs, considering the perceptions noted above, when they give us genuine security for a reasonable cost and when our feeling of security matches the reality of security. It is when the two are out of alignment that we get security wrong. We are also not adept at making coherent security trade-offs, especially in the context of a lot of ancillary information designed to persuade us in one direction or another. But when we reach the goal of complete lockdown on the security protocol, that is when you know the assessment was well worth the effort.
Physical Security
Physical security concerns any information that may be physically available and usable to gain specific information about company data, including documentation, personal information, and assets, as well as people susceptible to social engineering.
In its most widely practiced form, social engineering involves an attacker calling employees at the target organization on the phone and exploiting them into revealing sensitive information. The most frustrating aspect of social engineering attacks for security professionals is that they are nearly always successful. By pretending to be another employee, a customer, or a supplier, the attacker attempts to manipulate the target person into divulging some of the organization's secrets. Social engineering is deception, pure and simple. The techniques used by social engineers are often associated with computer attacks, most likely because of the fancy term "social engineering" applied to the techniques when used in computer intrusions. However, scam artists, private investigators, law enforcement, and even determined salespeople employ virtually the same techniques every single day.
Use public and private organizations to help staff security in and around the complex perimeter, and install alarms on all doors, windows, and ceiling ducts. Make a clear statement to employees by assigning clear roles and responsibilities to engineers, employees, and building maintenance staff: they must always have authorization before they can disclose any corporate data. Establish critical contacts and ongoing communication throughout a software product's life and the disclosure of its documentation. Employees who travel must be given mobile resources, with the correct security protocols installed on their mobile devices for communicating back and forth over a web connection. The company should utilize local, state, and remote facilities to back up data, or utilize services for extra security and protection of data resources. Such extra security could include surveillance of company waste so it is not susceptible to dumpster diving. Not that an assailant might be looking for yesterday's lunch; they will more likely be looking for shredded paper, important memos, or company reports you want to keep confidential.
Dumpster diving is a variation on physical break-in that involves rifling through an organization's trash to look for sensitive information. Attackers use dumpster diving to find discarded paper, CDs, DVDs, floppy disks (more obsolete but still viable), tapes, and hard drives containing sensitive data. In the computer underground, dumpster diving is sometimes referred to as trashing, and it can be a smelly affair. In the massive trash receptacle behind your building, an attacker might discover a complete diagram of your network architecture, or an employee might have carelessly tossed out a sticky note with a user ID and password. Although it may seem disgusting in most respects, a good dumpster diver can often retrieve informational gold from an organization's waste [1].
Conclusion
Security development involves the careful consideration of company value and trust. With the world as it exists today, we understand that the response to electronic attacks is not as strict as it should be, but such attacks are nonetheless unavoidable. Professional criminals, hired guns, and even insiders, to name just a few of the threats we face today, cannot be compared to the pimply teen hacker sitting at his computer ready to launch his newest attacks at your system. Their motivations can include revenge, monetary gain, curiosity, or common pettiness to attract attention or to feel accomplished in some way. Their skill levels range from the simple script kiddies using tools that they do not understand to elite masters who know the technology better than their victims and possibly even the vendors themselves.
The media has made a distinct point that, with the threat of digital terrorism, we are in the golden age of computer hacking. As we load more of our lives and society onto networked computers, attacks have become more prevalent and damaging. But do not get discouraged by the number and power of computer tools that can harm your system, because we also live in the golden age of information security. The defenses implemented and maintained are definitely what you need. Although they are often not easy, they do add a good deal of job security for effective system administrators, network managers, and security personnel. Computer attackers are excellent at sharing information with each other about how to attack your specific infrastructure, and their efficiency at distributing information on infiltrating their victims can be ruthless and brutal. Implementing and maintaining a comprehensive security program is not trivial. But do not get discouraged; we live in very exciting times, with technologies advancing rapidly, offering great opportunities for learning and growing.
If the technology itself is not exciting enough, just think of the great job security given to system administrators, security analysts, and network managers who have the knowledge and experience to secure their systems properly. Keep in mind that by staying diligent, you really can defend your information and systems while holding a challenging and exciting job that gives you ever more experience. To keep up with the attackers and defend our systems, we must understand their techniques. We must help system administrators, security personnel, and network administrators defend their computer systems against attack. Attackers come from all walks of life and have a variety of motivations and skill levels. Make sure you accurately assess the threat against your organization and deploy defenses that match both the threat and the value of the assets you must protect. And we must all be aware that we should never underestimate the power of a hacker who has enough time, patience, and know-how to accomplish anything they put their mind to. So it is your duty to do the same.
[1] Skoudis, Ed. Counter Hack Reloaded: A Step-by-Step Guide to Computer Attacks and Effective Defenses. 2nd ed. Massachusetts: Pearson Education, Inc., 2005. Safari Books Online. 12 Dec. 2007.

Tuesday, September 21, 2010

Basic FAQs About Computer Hacking

Computer hacking and identity theft work side by side nowadays. The accessibility and flexibility of the internet have allowed computer hackers to access varied personal data online, which is then sold to identity thieves. This has been an ongoing business that profits both the hacker and the identity thief at the expense of the victim.
Who are more prone to computer hacking?
The computer systems of small businesses are the most vulnerable to identity theft. These small businesses typically do not have large-scale security systems that can protect their database and client information. Computer hackers can easily access customer credit card information and employee payroll files, as these data are typically unguarded. Often, these small businesses do not keep access logs that track the date, time, and person who accessed their sensitive information. Without such logs, they have no way of knowing whether their database or payroll information has been stolen.
How does computer hacking take place?
Hacking attacks can be performed in a number of ways:
1. Hacking computers that have their firewalls disabled or not installed. Hackers also prey on wireless networks that do not have router firewalls enabled or installed.
2. Sending out email attachments that contain keystroke loggers or other malicious software that embeds itself on victims' computers without their knowledge. These programs record every keystroke the victims make on their computers and send the data out when the victims go online.
3. Attacking individual consumers who use old versions of browsers. These old browsers have vulnerabilities that are fixed with every new edition of the browser. If you use an older browser, chances are hackers can enter your computer more easily because of the browser you use.
4. Exploiting unsecured wireless networks, or secured wireless networks with very weak or poor password protection. Hackers can easily get inside such a wireless network and view what everyone on the network sees on their screens. When you enter your personal information, a hacker on the other end might be recording it for use in identity theft.
5. Previous employees or trusted users accessing their company's computers using insider knowledge to get inside. These people are often disgruntled when they leave the company, so they seek to get even by hacking into the system.
What can I do about computer hacking?
There are a number of steps you can take to evade computer hackers and potential identity theft. Here are some tips:
1. Ensure that all the computers in your home or office are protected by the latest firewalls and have anti-virus programs installed. These programs should be kept updated as well, or else they will not serve their purpose.
2. Use the latest browser, or if you've grown fond of your current one, make sure that you keep up with its security patches.
3. Using your anti-virus program and anti-spyware software, scan your computer regularly for any potential malware.
4. Be wary about the websites that you open. Do not click on just anything and avoid downloading everything that you see as "free download."

Blocking Intruders Using Network Intrusion Prevention

We all have heard of electric fences that are used to promote safety and deter intruders from entering our territory. It's no different for our computer networks: they also require effective protection, through an electric fence called network intrusion prevention. A network intrusion prevention system, or IPS, is generally referred to as an active security measure because it is capable of blocking malicious traffic by interfering in the data flow. In network security, the IPS represents the next-generation intrusion detection system: it inherits the thorough detection capabilities of an IDS and the blocking abilities of a firewall device to perform intrusion prevention.
How a Network Intrusion Prevention Device Works
A network intrusion prevention system thoroughly analyzes every network data packet that passes through the network. This way, an IPS keeps a check on the traffic and also recognizes patterns in the data. An IPS acts instantly whenever an unauthorized user carries out an attack on the network: it identifies the attack and denies that user access, leaving the attempt to intrude on the network futile. An IPS also plays an important role in shaping the traffic flow through the network and ensures that nothing interrupts crucial traffic. For instance, financial transactions can be prioritized over normal web surfing by using an IPS.
Network Intrusion Prevention and Zero-Day Threat Prevention
In addition to a signature database that contains known attack patterns, an IPS deploys a database of 'generic attack behaviors' intended to block unknown attacks. This functionality is referred to as zero-day threat prevention. A zero-day threat is a type of malicious code powerful enough to mislead even antivirus and anti-spyware software. You may deploy this functionality on your network, but it may block legitimate traffic by falsely identifying it as an attack. This is not the case with an intrusion detection system (IDS). The idea is to configure your IPS device to work like an IDS first, so that it can collect traffic and enable the administrator to recognize any false-positive flows. These flows can be excluded from the inspection engine once the system is configured to act as an IPS.

Network Security

Understanding Network Security Threats - What network security is, and how to learn network security technology, is the main thing we are talking about today.
To understand, in part, why we are where we are today, you only have to remember that PC is the acronym for personal computer. The PC was born and, for many years, evolved as the tool of the individual. In fact, much of the early interest and growth came as a rebellion against what appeared to be the exclusionary attitudes and many restrictions of early data-processing departments. Admittedly, many PCs were tethered to company networks, but even then there was often considerable flexibility in software selection, settings preferences, and even the sharing of resources such as folders and printers.
As a result, a huge industry of producers developed and sold devices, software, and services targeted at meeting user interests and needs, often with little or no thought about security. Prior to the Internet, a person could keep their computer resources safe simply by being careful about shared floppy disks.
Today, even the PCs of most individuals routinely connect to the largest network in the world (the Internet) to expand the user's reach and abilities. As the computing world grew, and skills and technology proliferated, people with less than honorable intentions discovered new and more powerful ways to apply their craft. Just as a gun makes a robber a greater threat, computers give the scam artist, terrorist, thief, or pervert the opportunity to reach out and hurt others in greater numbers and from longer distances.