Thursday, September 23, 2010

The World of Cracks


Our pavements have never been in a worse state of disrepair than they are today, and the reasons are many. It is estimated that 650 potholes open up every minute in North American streets and highways, or 341,640,000 per year. Repairing these would require $51,264,000,000 per year. It is also estimated that there are 88,070,400,000 feet of cracks in North American streets and highways that need to be repaired annually, at a cost of $28,182,528,000. Repairing both the potholes and the cracks therefore requires $79,428,528,000 annually, an enormous sum which towns, cities, counties, and state and provincial governments have to come up with every year. No matter what, these potholes and cracks will reappear every year. In addition, the North American motorist incurs substantial annual costs from poor roads, above the normal routine costs of operating a motor vehicle. A fairly recent report stated that in Los Angeles this additional cost of operating a motor vehicle is $778. The additional annual cost in other North American cities is somewhat less, but still significant. Poor road conditions also contribute substantially to the number of fatalities each year.
Pavements showing various distresses such as cracks, potholes and ruts also contribute significantly to increased fuel consumption. A 1985 road information survey concluded that the additional fuel consumed due to failed pavements amounted to $21.3 billion annually. With gasoline averaging $1.15 per gallon across the United States at the time, that works out to roughly 18.52 billion gallons of wasted fuel. We can safely say that the amount of gasoline wasted today due to failed pavements is significantly higher: we not only have more failed pavements than ever before, we also have many more cars on the road.
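The figures above follow from straightforward arithmetic; a quick sketch checking them (the per-pothole cost of roughly $150 and the per-foot crack cost of $0.32 are inferred from the article's totals, not stated in it):

```python
# Check the repair-cost and wasted-fuel arithmetic quoted above.
MINUTES_PER_YEAR = 365 * 24 * 60            # 525,600 minutes

potholes_per_year = 650 * MINUTES_PER_YEAR
print(potholes_per_year)                    # 341640000, matching the article

cost_per_pothole = 51_264_000_000 / potholes_per_year   # ~$150 each (inferred)
cost_per_foot = 28_182_528_000 / 88_070_400_000         # $0.32 per foot (inferred)

wasted_gallons = 21.3e9 / 1.15              # $21.3B of fuel at $1.15/gallon
print(round(wasted_gallons / 1e9, 2))       # ~18.52 billion gallons
```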
Towns, cities, counties, and state and provincial governments have a number of programs in place that they hope will keep their street and highway pavements in a state of good repair. These programs consist of preventive maintenance, which means filling cracks and potholes, overlaying failed pavements with a lift or two of conventional asphalt concrete, or reconstructing a failed pavement altogether. Following these programs requires an enormous amount of money, which is difficult to come by, especially during this financial crisis. Cities, towns and counties are near bankruptcy. Even formerly wealthy states, like California, have been forced to eliminate 20,000 government jobs and slash the pay of 200,000 other employees (during fiscal 2009). The credit crunch of 2008 may last for many more years before enough private capital is available to fuel the economy. Most of the capital available today has come from the stimulus packages passed by Congress. What does all this mean? It means most of our pavements will remain in disrepair for the foreseeable future. Even if funding is restored to pre-2008 levels, it will still be insufficient to cope with all our failed pavements.
Our present approach to dealing with all the failed pavements requires new thinking by all of us. The first thing we must do is think outside the box. So let us look at how we are trying to maintain our extensive network of streets and highways. One term that we constantly bandy about is "preventive maintenance". What exactly are we trying to prevent? When we fill potholes or cracks, do we really believe that they are repaired permanently? When we overlay an old failed pavement with one or two lifts of new asphalt concrete, do we really believe that cracks and potholes will not reflect through the new overlay? If we do, then we need a drastic re-evaluation of our experience, training and education. Our so-called preventive maintenance involves conventional asphalt concrete, either dense-graded or gap-graded. Use of conventional asphalt concrete mixes will do nothing to prevent the old failures from recurring in a very short time. Sometimes it may take only a few months, sometimes a few years, but they will recur. Paving streets, roads and highways requires huge quantities of crushed aggregate, which is readily available. When this aggregate is mixed with the proper amount of asphalt cement, as determined in the laboratory, the resultant mix is classified as conventional asphalt concrete. Conventional asphalt concrete is relatively economical compared to other construction materials, which is one reason it is used so extensively in road construction. But the pavement failures we observe are the direct result of using conventional asphalt concrete mixes. We can change the gradation of the aggregate all we want without improving the performance of the pavement significantly. Thus, we can have dense-graded mixes or gap-graded ones and in the long run find very little improvement. It may take a little longer for a gap-graded pavement to crack compared to a dense-graded asphalt concrete mix.
However, the resultant cracks will be wider and grow faster because of the predominantly larger particle sizes. There are many products on the market that claim to mitigate reflective cracking. Some work better than others while some do not work at all.
When we mitigate or even eliminate the cracking of new pavements, and reflective cracking in overlays, we will at the same time eliminate the formation of most potholes. As a result, we would drastically reduce the $79,428,528,000 annual cost of filling all the cracks and potholes that appear every year in North American streets, roads and highways. Until then, we can only conclude that it will continue to require enormous funds just to maintain our highway infrastructure.
The various state and provincial governments own, on average, less than 15% of the roads and highways; the towns, cities and counties own the rest. It is these local jurisdictions that carry most of the burden of pavement maintenance, and as a result, it is they that have to come up with the money. Clearly, this is just too much of a financial burden to cope with. The first casualty of any budget is the funds allocated for street and road maintenance, which are drastically reduced. Is it any wonder that our streets and roads are in such disrepair?

What is a Denial-Of-Service Attack?

A denial-of-service (DoS) attack attempts to prevent legitimate users from accessing information or services. By targeting your computer and its network connection, or the computers and network of the sites you are trying to use, an attacker may be able to prevent you from accessing email, websites, online accounts, banking, root name servers, or other services that rely on the affected computer.
One common method of attack involves saturating the target machine with communications requests, so that it cannot respond to legitimate traffic, or responds so slowly that it is effectively unavailable.
During normal network communications using TCP/IP, a user contacts a server with a request to display a web page, download a file, or run an application. The request opens with a greeting message called a SYN. The server responds with its own SYN along with an acknowledgment (ACK) of the user's initial request; together these are called a SYN+ACK. The server then waits for a reply, or ACK, from the user acknowledging that it received the server's SYN. Once the user replies, the connection is established and data transfer can begin.
In a DoS attack against a server, the attacker sends a SYN request to the server. The server responds with a SYN+ACK and waits for a reply. However, the attacker never sends the final ACK needed to complete the connection.
The server continues to "hold the line open" and wait for a response (which is not coming) while at the same time receiving more false requests and keeping more lines open for responses. After a short period, the server runs out of resources and can no longer accept legitimate requests.
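The exhaustion described above can be modeled with a toy server that tracks half-open connections; once the backlog fills with SYNs that never complete, legitimate clients are refused. This is a simplified illustration, not real TCP, and names like `MAX_HALF_OPEN` are invented for the sketch:

```python
# Toy model of SYN-flood resource exhaustion (not a real TCP stack).
MAX_HALF_OPEN = 128          # server's connection backlog (illustrative)

class ToyServer:
    def __init__(self):
        self.half_open = set()   # connections awaiting the final ACK

    def syn(self, client_id):
        """Client sends SYN; server replies SYN+ACK and holds a slot open."""
        if len(self.half_open) >= MAX_HALF_OPEN:
            return "refused"     # resources exhausted: denial of service
        self.half_open.add(client_id)
        return "syn+ack"

    def ack(self, client_id):
        """Legitimate client completes the handshake, freeing the slot."""
        self.half_open.discard(client_id)
        return "established"

server = ToyServer()
# The attacker sends SYNs but never the final ACK:
for i in range(MAX_HALF_OPEN):
    server.syn(f"attacker-{i}")
print(server.syn("legitimate-user"))   # → refused
```

Real servers mitigate this with timeouts and techniques such as SYN cookies, but the basic failure mode is the same: state held for connections that will never complete.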
A variation of the DoS attack is the distributed denial of service (DDoS) attack. Instead of using one computer, a DDoS may use thousands of remote controlled zombie computers in a botnet to flood the victim with requests. The large number of attackers makes it almost impossible to locate and block the source of the attack. Most DoS attacks are of the distributed type.
An older type of DoS attack is the smurf attack. During a smurf attack, the attacker sends a request to a large number of computers, spoofed so it appears to come from the target server. Each computer responds to the target server, overwhelming it and causing it to crash or become unavailable. Smurf attacks can be prevented with a properly configured operating system or router, so they are no longer common.
DoS attacks are not limited to wired networks but can also be used against wireless networks. An attacker can flood the radio frequency (RF) spectrum with enough electromagnetic interference to prevent a device from communicating effectively with other wireless devices. This attack is rarely seen due to the cost and complexity of the equipment required to flood the RF spectrum.
Some symptoms of a DoS attack include:
  • Unusually slow performance when opening files or accessing web sites
  • Unavailability of a particular web site
  • Inability to access any web site
  • Dramatic increase in the number of spam emails received
To prevent DoS attacks, administrators can use firewalls to deny traffic by protocol, port, or IP address. Some switches and routers can be configured to detect and respond to DoS attacks using automatic traffic-rate filtering and balancing. Additionally, application front-end hardware and intrusion prevention systems can analyze data packets as they enter the system and identify whether they are regular or dangerous.

Software Security Development - A White Hat's Perspective

"If you know the enemy and know yourself you need not fear the results of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle." - Sun Tzu[1]
Introduction
How to know your enemy
Knowing your enemy is vital to fighting him effectively. Security should be learned not just through network defense, but also by studying software vulnerabilities and the techniques used to exploit them for malicious intent. As computer attack tools and techniques continue to advance, we will likely see major, life-impacting events in the near future. However, if we integrate security into our systems from the start and conduct thorough security testing throughout the software life cycle, we can create a much more secure world, with risk managed down to an acceptable level. One of the most interesting ways of learning computer security is to study and analyze from the perspective of the attacker. A hacker or a cracker uses various available software applications and tools to analyze and investigate weaknesses in network and software security and to exploit them. Exploiting software is exactly what it sounds like: taking advantage of some bug or flaw and repurposing it to work to the attacker's advantage.
Similarly, your personal sensitive information can be very useful to criminals. These attackers might be looking for sensitive data to use in identity theft or other fraud, a convenient way to launder money, information useful in their criminal business endeavors, or system access for other nefarious purposes. One of the most important stories of the past couple of years has been the rush of organized crime into the computer-attack business. These criminals apply business processes to making money from computer attacks. This type of crime can be highly lucrative for those who steal and sell credit card numbers, commit identity theft, or even extort money from a target under threat of a DoS flood. Further, if the attackers cover their tracks carefully, the chances of going to jail are far lower for computer crimes than for many types of physical crimes. Finally, by operating from an overseas base, in a country with little or no legal framework for prosecuting computer crime, attackers can operate with virtual impunity [1].
Current Security
Assessing the vulnerabilities of software is the key to improving the current security of a system or application. Such a vulnerability analysis should take into consideration any holes in the software through which a threat could be carried out. This process should highlight points of weakness and assist in the construction of a framework for subsequent analysis and countermeasures. The security we have in place today includes firewalls, counterattack software, IP blockers, network analyzers, virus protection and scanning, encryption, user profiles and password keys. Understanding how attacks target these basic protections, and the software and computer systems that host them, is important to making software and systems stronger.
You may have a task that requires a client-host module, which in many instances is the starting point from which a system is compromised. Understanding the framework you're utilizing, including the kernel, is also imperative for preventing an attack. When a function is called in a program, it accesses the stack, which holds important data such as local variables, the function's arguments, and the return address; the exact layout depends on details such as the order of operations within a structure and the compiler being used. In a stack overflow, an attacker who has worked out this layout supplies input that overwrites values on the stack, forcing the program to produce a different result than intended. This can be useful to a hacker who wants to obtain information granting access to a person's account, or as a stepping stone to something like an SQL injection into your company's database. A related technique that does not require knowing the size of the buffer is the heap overflow, which targets dynamically allocated buffers, used when the size of the data is not known until run time and memory is reserved on allocation.
We already know a little bit about integer overflows (or should at least). Integer overflows occur when an arithmetic result exceeds what a fixed-size variable can represent, flipping the sign bit so the stored value wraps around to a negative number. Although this sounds harmless, the resulting values are dramatically wrong, which can serve an attacker's needs, for example by causing a denial of service. If engineers and developers do not check for overflows such as these, the resulting errors can end up overwriting some part of memory, and if anything in memory is accessible, that could shut down an entire system or leave it vulnerable later down the road.
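Python integers do not overflow, but the 32-bit wrap-around described above can be simulated by masking to 32 bits. A minimal sketch of why a naive size check goes wrong (the variable names are invented for illustration):

```python
def to_int32(n):
    """Interpret n as a 32-bit signed integer, the way C would store it."""
    n &= 0xFFFFFFFF                 # keep only the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

length = 2_147_483_647              # INT_MAX: largest 32-bit signed value
print(to_int32(length + 1))         # wraps to -2147483648

# A guard like `if length + 1 > 0` now takes the wrong branch, so a later
# allocation or copy can be far smaller than the data actually written.
```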
Format string vulnerabilities are the result of poor attention to code by the programmers who write it. If a programmer writes "printf(string);" or something similar, where the string comes from user input, an attacker can embed format parameters such as "%x" in that input, and printf will return the hexadecimal contents of the stack. There are many testing tools and techniques for probing the design of frameworks and applications, such as "fuzzing", which can prevent these kinds of exploits by finding where the holes lie.
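The classic format-string bug is a C problem, but Python has an analogue: passing user-controlled text as the format string can leak data the template was never meant to expose. A hedged sketch (the `CONFIG` dict and its field names are invented for illustration):

```python
CONFIG = {"user": "alice", "api_key": "s3cr3t"}   # invented example data

def render_greeting(template):
    # BUG: the template comes from the user but is used as the format string.
    return template.format(**CONFIG)

print(render_greeting("Hello {user}!"))        # intended use: Hello alice!
print(render_greeting("Leaked: {api_key}"))    # attacker template leaks the key

def render_greeting_safe(name):
    # FIX: keep the format string constant; interpolate only the intended field.
    return "Hello {}!".format(name)
```

The fix is the same in both languages: the format string must be a constant controlled by the programmer, never by the user.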
Exploiting these software flaws almost always means supplying bad input to the software so it acts in a way it was not intended or predicted to. Bad input can produce many kinds of returned data and effects in the software logic, which can be reproduced by learning the input flaws. In most cases this involves overwriting original values in memory, whether through data handling or code injection. TCP/IP (Transmission Control Protocol/Internet Protocol) and related protocols are incredibly flexible and can be used for all kinds of applications. However, the inherent design of TCP/IP offers many opportunities for attackers to undermine the protocol, causing all sorts of problems with our computer systems. By undermining TCP/IP and other ports, attackers can violate the confidentiality of our sensitive data, alter the data to undermine its integrity, pretend to be other users and systems, and even crash our machines with DoS attacks. Many attackers routinely exploit the vulnerabilities of traditional TCP/IP to gain access to sensitive systems around the globe with malicious intent.
Hackers today have come to understand operating frameworks and the security vulnerabilities within the operating structure itself. Windows, Linux and UNIX systems have all been openly exploited for their flaws by means of virus, worm or Trojan attacks. After gaining access to a target machine, attackers want to maintain that access, and they use Trojan horses, backdoors and root-kits to achieve this goal. But just because operating environments may be vulnerable to attack doesn't mean your system has to be as well. With security now integrated into operating systems such as Windows Vista, and the open-source model behind Linux, maintaining an effective security profile is far more manageable.
Finally, I want to discuss the kind of technology we're seeing that actually hacks the hacker, so to speak. Recently a security researcher named Joel Eriksson showcased an application that infiltrates the hacker's own attack tools and turns them against him.
Wired article on the RSA convention with Joel Eriksson:
"Eriksson, a researcher at the Swedish security firm Bitsec, uses reverse-engineering tools to find remotely exploitable security holes in hacking software. In particular, he targets the client-side applications intruders use to control Trojan horses from afar, finding vulnerabilities that would let him upload his own rogue software to intruders' machines." [7]
Hackers, particularly in China, use a remote administration tool (RAT) called PCShare to hack their victims' machines and upload or download files. Eriksson found a bug in the program that its writers most likely overlooked or didn't think to encrypt: the module that displays the download and upload times for files. The hole was enough for Eriksson to write files onto the attacker's own system and even control the server's autostart directory. This technique works not only on PCShare but on various botnets as well. New software like this is coming out every day, and it will be beneficial for your company to know which kinds will help fight the interceptor.
Mitigation Process and Review
Software engineering practices for quality and integrity include the software security framework patterns that will be used. "Confidentiality, integrity, and availability have overlapping concerns, so when you partition security patterns using these concepts as classification parameters, many patterns fall into the overlapping regions" [3]. Among these security domains there are other areas of high pattern density, including distributed computing, fault tolerance and management, and process and organizational structuring. These subject areas are enough to make a complete course on patterns in software design [3].
We must also focus on the context of the application, which is where the pattern is applied, and on the stakeholders' views and the protocols they want to serve. Threat models such as the CIA model (confidentiality, integrity and availability) define the problem domain for the threats and the classifications behind the patterns used under that model. Such classifications are defined under the Defense in Depth, Minefield and Grey Hats techniques.
The tabular classification scheme for security patterns defines classifications based on domain concepts, which fails to account for the more general patterns that span multiple categories. What its authors tried to do in classifying patterns was to base the classification on the problems that need to be solved. They partitioned the security pattern problem space using the threat model in particular to distinguish the scope. A classification process based on threat models is more perceptive because it uses the security problems that patterns solve. An example of these threat models is STRIDE, an acronym for the following concepts:
Spoofing: An attempt to gain access to a system using a forged identity. A compromised system would give an unauthorized user access to sensitive data.
Tampering: Data corruption during network communication, where the data's integrity is threatened.
Repudiation: A user's refusal to acknowledge participation in a transaction.
Information Disclosure: The unwanted exposure and loss of private data's confidentiality.
Denial of service: An attack on system availability.
Elevation of Privilege: An attempt to raise the privilege level by exploiting some vulnerability, where a resource's confidentiality, integrity, and availability are threatened. [3]
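As a sketch, the STRIDE categories above can be captured in code and used to tag threats during a design review. The `threats` inventory and its entries are invented examples, not part of any standard:

```python
from enum import Enum

class Stride(Enum):
    SPOOFING = "forged identity used to gain access"
    TAMPERING = "data integrity threatened in transit"
    REPUDIATION = "participation in a transaction denied"
    INFORMATION_DISCLOSURE = "confidential data exposed"
    DENIAL_OF_SERVICE = "system availability attacked"
    ELEVATION_OF_PRIVILEGE = "privilege raised by exploiting a vulnerability"

# Example threat inventory for a design review (illustrative entries):
threats = [
    ("login form accepts a replayed session token", Stride.SPOOFING),
    ("SYN flood against the public API", Stride.DENIAL_OF_SERVICE),
    ("stack overflow in the parser yields a root shell", Stride.ELEVATION_OF_PRIVILEGE),
]

for description, category in threats:
    print(f"{category.name}: {description}")
```

Tagging each identified threat this way makes it easy to see which STRIDE categories a design review has not yet covered.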
What this threat model covers can be discussed using the following four patterns: Defense in Depth, Minefield, Policy Enforcement Point, and Grey Hats. Nearly all patterns belong to multiple groups one way or another, because classifying abstract threats cleanly is difficult. The IEEE classification hierarchy is a tree whose nodes are organized by domain-specific terminology; pattern navigation is easier and more meaningful in this format. A classification scheme based on the STRIDE model alone is limited, because patterns that address multiple concepts cannot be classified in a two-dimensional schema. The hierarchical scheme shows not only the leaf nodes, which display the patterns, but also the multiple threats that affect them. The internal nodes at higher levels collect the threats that affect everything at the dependent levels below. Threat patterns at the tree's root apply to multiple contexts, which consist of the core, the perimeter, and the exterior. Patterns that are more basic, such as Defense in Depth, reside at the classification hierarchy's highest level because they apply to all contexts. Using network tools to hunt for these threat concepts, such as spoofing, intrusion tampering, repudiation, DoS, and secure pre-forking, allows the development team to pinpoint the areas of weakness in core, perimeter and exterior security.
Defense against kernel-mode root-kits should keep attackers from gaining administrative access in the first place, for instance by applying system patches. Tools for Linux, UNIX and Windows look for the anomalies that user-mode and kernel-mode root-kits introduce on a system. Although a perfectly implemented and perfectly installed kernel root-kit can dodge a file integrity checker, reliable scanning tools are still useful because they can find the very subtle mistakes made by an attacker that a human might miss. Linux also provides useful tools for incident response and forensics; for example, some tools return output that can be trusted more readily than that of a system compromised by user- or kernel-mode root-kits.
Logs that have been tampered with are less than useless for investigative purposes, and conducting a forensic investigation without logging checks is like cake without the frosting. The amount of attention needed to defend a given system's logs depends on the sensitivity of the server. Computers on the Internet that contain sensitive data require great care to protect; for some systems on an intranet, logging might be less imperative. However, for vitally important systems containing sensitive information about human resources, legal issues, or mergers and acquisitions, the logs can make or break your company's confidentiality. Detecting an attack and finding the evidence that digital forensics needs is vital for building a case against the intruder. So encrypt those logs: the better the encryption, the less likely they will ever be tampered with.
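One concrete way to make log tampering detectable is to chain a keyed hash (HMAC) over each entry, so that each tag covers both the entry and the previous tag; editing any line without the key breaks the chain. A minimal sketch (the key handling is simplified for illustration; a real deployment would keep the key off the logged host):

```python
import hmac
import hashlib

KEY = b"store-me-somewhere-safer"   # illustrative only; never hardcode a real key

def chain_logs(lines):
    """Return (line, tag) pairs; each tag covers the line plus the previous tag."""
    prev_tag = b""
    out = []
    for line in lines:
        tag = hmac.new(KEY, prev_tag + line.encode(), hashlib.sha256).hexdigest()
        out.append((line, tag))
        prev_tag = tag.encode()
    return out

def verify(chained):
    """Recompute the chain; any edited line or tag makes verification fail."""
    prev_tag = b""
    for line, tag in chained:
        expect = hmac.new(KEY, prev_tag + line.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expect):
            return False
        prev_tag = tag.encode()
    return True

logs = chain_logs(["user alice logged in", "file /etc/passwd read"])
assert verify(logs)
logs[1] = ("file /tmp/nothing read", logs[1][1])   # attacker edits a line
assert not verify(logs)                            # tampering is detected
```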
Fuzz Protocols
Protocol fuzzing is a software testing technique that automatically generates, then submits, random or sequential data to various areas of an application in an attempt to uncover security vulnerabilities. It is most commonly used to discover weaknesses in applications and protocols that handle data transport between client and host. The basic idea is to attach the inputs of a program to a source of random or unexpected data. If the program fails (for example, by crashing, or by failing built-in code assertions), then there are defects to correct. These fuzzing techniques were first developed by Professor Barton Miller and his associates [5]. The intent was to change the mentality from being too confident in one's technical knowledge to actually questioning the conventional wisdom behind security.
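A minimal fuzzer following this idea feeds random bytes to a parser and records any input that crashes it. The `parse_record` target below is an invented stand-in with a deliberate bounds-check bug, purely to show the loop:

```python
import random

def parse_record(data: bytes):
    """Invented target with a deliberate bug, purely for demonstration."""
    if not data:
        raise ValueError("empty input")        # graceful, expected rejection
    length = data[0]
    payload = data[1:1 + length]
    checksum = data[1 + length]                # BUG: no bounds check -> IndexError
    return payload, checksum

def fuzz(target, rounds=500, seed=0):
    """Feed random byte strings to target; collect inputs that crash unexpectedly."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            target(data)
        except ValueError:
            pass                               # expected, graceful rejection
        except Exception as exc:               # anything else is a bug to report
            crashes.append((data, exc))
    return crashes

found = fuzz(parse_record)
print(f"{len(found)} crashing inputs found")   # the length-byte bug surfaces quickly
```

Real fuzzers add coverage feedback, input mutation and crash triage on top of this loop, but the core idea is exactly this: random input, watch for the wrong kind of failure.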
Luiz Edwardo on protocol fuzzing:
"Most of the time, when the perception of security doesn't match the reality of security, it's because the perception of the risk does not match the reality of the risk. We worry about the wrong things: paying too much attention to minor risks and not enough attention to major ones. We don't correctly assess the magnitude of different risks. A lot of this can be chalked up to bad information or bad mathematics, but there are some general pathology that come up over and over again" [6].
With fuzzing now mainstream, we have seen numerous bugs in systems make national or even international news. Attackers start with a list of contacts, a handful of IP addresses for your network, and a list of domain names. Using a variety of scanning techniques, they can then gain valuable information about the target network: a list of phone numbers with modems (obsolete but still viable), a group of wireless access points, addresses of live hosts, network topology, open ports, and firewall rule sets. The attacker may even have gathered a list of vulnerabilities found on your network, all the while trying to evade detection. At this point, the attackers are poised for the kill, ready to take over systems on your network. This growth in fuzzing has shown that delivering product or service software using only basic testing practices is no longer acceptable. Because the Internet provides so many protocol-breaking tools, it is very likely that an intruder will break your company's protocol on all levels of its structure, semantics and protocol states. So in the end, if you do not fuzz it, someone else will. Session-based, and even state-based, fuzzing practices have been used to establish connections using the state level of a session to achieve better fault isolation. But the real challenge behind fuzzing is applying these techniques while isolating the fault environment, the bugs, the protocol implementation and the monitoring of the environment.
Systems Integration
There are three levels of systems integration the developer must consider for security. The software developer must consider the entire mitigation review of the software flaw and base it on the design implementation. This includes access control, intrusion detection and the trade-offs for the implementation. Integrating these controls into the system is important in the implementation stage of development. Attacks on these systems may even lead to severe safety and financial effects. Securing computer systems has become a very important part of system development and deployment.
Since we cannot completely take away the threats, we must minimize their impact instead. This can be made possible by building an understanding of the human and technical issues involved in such attacks. That knowledge allows an engineer or developer to make the intruder's life as hard as possible, and it makes the greater challenge one of understanding the attacker's motivations and skill level. Think of it as getting inside the hacker's head and thinking like them psychologically.
Access Control
Even if you have implemented all of the controls you can think of, there are a variety of other security lockdowns that must continually be applied against the constant attacks on a system. You might apply security patches, use a file integrity checking tool, and have adequate logging, but have you recently looked for unsecured modems? Have you activated security on the ports and switches in your critical network segments to prevent the latest sniffing attack? Have you considered implementing non-executable stacks to prevent one of the most common types of attack today, the stack-based buffer overflow? And you should always be ready for kernel-level root-kits accompanying any of these attacks, which imply the attacker is capable of taking command of your system away from you.
Password attacks are very common in exploiting software authorization protocols. Attackers often try to guess passwords for systems to gain access, either by hand or using generated scripts. Password cracking involves taking the encrypted or hashed passwords from a system cache or registry and using an automated tool to determine the original passwords. Password-cracking tools create password guesses, encrypt or hash the guesses, and compare the result with the stolen encrypted or hashed password. The guesses can come from a dictionary, brute-force routines, or hybrid techniques. This is why access controls must protect human, physical and intellectual assets against loss, damage or compromise by permitting or denying entrance into, within and from the protected area. The controls also grant or deny access rights, and the times at which they apply, and are operated by human resources using physical and/or electronic hardware in accordance with policy. To defend against password attacks, you must have a strong password policy that requires users to have nontrivial passwords. You must make users aware of the policy, employ password-filtering software, and periodically crack your own users' passwords (with appropriate permission from management) to enforce the policy. You might also want to consider authentication tools stronger than passwords, such as PKI authentication, hardware tokens or auditing software [1].
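The cracking loop described above is simple to sketch: hash each guess and compare against the stolen hash. The "victim" hash and wordlist below are invented; note that real systems should use salted, deliberately slow hashes such as bcrypt precisely to make this loop expensive, and plain SHA-256 is used here for illustration only:

```python
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# A stolen hash (illustrative): the victim chose a dictionary word.
stolen_hash = sha256_hex("sunshine")

def dictionary_crack(target_hash, wordlist):
    """Hash each candidate (plus simple hybrid variants) and compare."""
    for word in wordlist:
        for guess in (word, word.capitalize(), word + "1"):
            if sha256_hex(guess) == target_hash:
                return guess
    return None

print(dictionary_crack(stolen_hash, ["password", "dragon", "sunshine"]))  # → sunshine
```

This is exactly why nontrivial passwords matter: a password absent from every wordlist and rule set forces the attacker back to brute force.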
Despite this, another developer might be interested in authentication only. This developer would first create minimal access points where the authenticator pattern enforces authentication policies. The subject descriptor defines the data used to grant or deny the authentication decision. A password synchronizer pattern performs distributed password management. Authenticator and password synchronizer are not directly related; users will need to apply other patterns after authenticator before they can use a password synchronizer.
Intrusion Detection
Intrusion detection is used to monitor and log the activity of security risks. A functioning network intrusion detection system should indicate that someone has found the doors, but nobody has actually tried to open them yet. It inspects inbound and outbound network activity and identifies patterns that may indicate a network or system attack by someone attempting to compromise the system. In misuse detection, the system analyzes the information it gathers and compares it to large databases of attack signatures; in essence, it looks for specific attacks that have already been documented. Like a virus detection system, the detection system is only as good as the index of attack signatures it compares packets against. In anomaly detection, the system administrator defines the normal state of the network's traffic breakdown, load, protocols, and typical packet size; segments are then compared against this normal baseline to look for anomalies. The design of intrusion detection must also take into account, and detect, malicious packets that are meant to slip past a generic firewall's basic filtering rules. In a host-based system, the detection system examines the activity on each individual computer or host. As long as you are securing the environment and authorizing transactions, intrusion detection should pick up no activity from a flaw in the system's data flow.
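The anomaly-detection idea above can be sketched as a simple baseline-and-threshold check over traffic samples. The traffic numbers and the three-standard-deviation threshold are invented for illustration; real systems model many features, not just request rate:

```python
from statistics import mean, stdev

# Requests-per-minute samples from normal operation (illustrative baseline):
baseline = [98, 103, 95, 110, 99, 105, 101, 97]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(rate, k=3.0):
    """Flag a traffic rate more than k standard deviations from the baseline mean."""
    return abs(rate - mu) > k * sigma

print(is_anomalous(104))    # ordinary load
print(is_anomalous(5000))   # flood-level traffic
```

The trade-off is visible even in this toy: a tight threshold catches subtle attacks but raises false alarms; a loose one stays quiet but misses slow, low-volume intrusions.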
Trade-Offs
Trade-offs of the implementation must also be taken into consideration when developing these controls and detection software. The developer must weigh the severity of the risk, the probability of the risk, the magnitude of the costs, and how effective the countermeasure is at mitigating the risk. Even when a risk analysis is complete, actual changes must be considered and the security assessment reassessed throughout the process. The one area that can cause the feeling of security to diverge from the reality of security is the idea of risk itself. If we get the severity of the risk wrong, we get the trade-off wrong, which cannot be allowed at a critical level. We can misjudge risk in two ways. First, we can underestimate risks, like the risk of an automobile accident on the way to work. Second, we can overestimate risks, such as the risk of someone stalking you or your family. When we overestimate and when we underestimate is governed by a few specific heuristics. One heuristic area is probability: "bad security trade-offs is probability. If we get the probability wrong, we get the trade-off wrong" [6]. These heuristics are not specific to risk, but they contribute to bad evaluations of risk, and as humans, our ability to quickly assess probability in our heads runs into all sorts of problems. When we organize ourselves to correctly analyze a security issue, it becomes mere statistics. But in the end we still need to judge the threat of the risk, which can be framed by "listing five areas where perception can diverge from reality:"
-The severity of the risk.
-The probability of the risk.
-The magnitude of the costs.
-How effective the countermeasure is at mitigating the risk.
-The trade-off itself [6].
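As a rough illustration of how the first four of these factors combine, here is a toy expected-loss calculation; all dollar figures and probabilities are hypothetical.

```python
# Crude risk trade-off arithmetic: a countermeasure is worth deploying
# when the expected loss it removes exceeds its own cost.
# All numbers below are invented for illustration.

def expected_loss(probability: float, severity: float) -> float:
    """Expected annual loss from a risk (probability x magnitude)."""
    return probability * severity

def worthwhile(probability, severity, effectiveness, countermeasure_cost):
    """Does the mitigated portion of the loss exceed the cost?"""
    risk_removed = expected_loss(probability, severity) * effectiveness
    return risk_removed > countermeasure_cost

# A 5% yearly chance of a $200,000 breach, 80% mitigated by a $5,000 control:
print(worthwhile(0.05, 200_000, 0.80, 5_000))   # -> True (8,000 > 5,000)
```

Getting the probability or severity wrong, as the quoted heuristic warns, flips the answer: halve the probability and the same control is no longer worth its cost.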
To think a system is completely secure is absurd and illogical at best, unless hardware security were far more widespread. The feeling and the reality of security are different, but they are closely related. We make our best security trade-offs, considering the perceptions noted above, when a trade-off gives us genuine security for a reasonable cost and when our actual feeling of security matches the reality of security. It is when the two are out of alignment that we get security wrong. We are also not adept at making coherent security trade-offs, especially in the context of a lot of ancillary information designed to persuade us in one direction or another. But when you reach the goal of a complete lockdown on security protocol, that is when you know the assessment was well worth the effort.
Physical Security
Physical security concerns any information that may be available and used to gain specific knowledge about company-related data, including documentation, personal information, assets, and people susceptible to social engineering.
In its most widely practiced form, social engineering involves an attacker calling employees at the target organization on the phone and manipulating them into revealing sensitive information. The most frustrating aspect of social engineering attacks for security professionals is that they are nearly always successful. By pretending to be another employee, a customer, or a supplier, the attacker attempts to trick the target person into divulging some of the organization's secrets. Social engineering is deception, pure and simple. The techniques used by social engineers are often associated with computer attacks, most likely because of the fancy term "social engineering" applied to the techniques when used in computer intrusions. However, scam artists, private investigators, law enforcement, and even determined salespeople employ virtually the same techniques every single day.
Use public and private organizations to help with staffed security in and around the complex perimeter, and install alarms on all doors, windows, and ceiling ducts. Assign clear roles and responsibilities to engineers, employees, and building maintenance staff, and make a clear statement that they must always have authorization before disclosing any corporate data. Establish critical contacts and ongoing communication throughout a software product's life cycle and for the disclosure of documentation. Employees who travel must be given mobile resources, with the correct security protocols installed on their mobile devices for communicating back and forth over a web connection. The company should utilize local, state, and remote facilities to back up data, or use outside services for extra security and protection of data resources. Such extra security could include surveillance of company waste so that it is not susceptible to dumpster diving. An assailant is unlikely to be looking for yesterday's lunch; he will more likely be looking for shredded paper, important memos, or company reports you want to keep confidential.
Dumpster diving is a variation on physical break-in that involves rifling through an organization's trash to look for sensitive information. Attackers use dumpster diving to find discarded paper, CDs, DVDs, floppy disks (more obsolete but still viable), tapes, and hard drives containing sensitive data. In the computer underground, dumpster diving is sometimes referred to as trashing, and it can be a smelly affair. In the massive trash receptacle behind your building, an attacker might discover a complete diagram of your network architecture, or an employee might have carelessly tossed out a sticky note with a user ID and password. Although it may seem disgusting in most respects, a good dumpster diver can often retrieve informational gold from an organization's waste [1].
Conclusion
Security development involves the careful consideration of company value and trust. In the world as it exists today, responses to electronic attacks are not as strict as they should be, but such attacks are nonetheless unavoidable. Professional criminals, hired guns, and even insiders, to name just a few of the threats we face today, cannot be compared to the pimply teen hacker sitting at a computer ready to launch his or her newest attack at your system. Their motivations can include revenge, monetary gain, curiosity, or common pettiness, whether to attract attention or to feel accomplished in some way. Their skill levels range from simple script kiddies using tools they do not understand to elite masters who know the technology better than their victims and possibly even the vendors themselves.
The media has made a distinct point that we are in the golden age of computer hacking. As we load more of our lives and society onto networked computers, attacks have become more prevalent and damaging. But do not be discouraged by the number and power of tools that can harm your system, for we also live in the golden age of information security. The defenses you implement and maintain are definitely needed; although they are often not easy, they add a good deal of job security for effective system administrators, network managers, and security personnel. Computer attackers are excellent at sharing information with each other about how to attack your specific infrastructure, and their efficiency in distributing information about infiltrating their victims can be ruthless and brutal. Implementing and maintaining a comprehensive security program is not trivial. But do not get discouraged: we live in very exciting times, with technologies advancing rapidly, offering great opportunities for learning and growing.
If the technology itself is not exciting enough, just think of the great job security afforded to system administrators, security analysts, and network managers who know how to secure their systems properly. Keep in mind that by staying diligent, you really can defend your information and systems while holding a challenging and exciting job that continually builds your experience. To keep up with the attackers and defend our systems, we must understand their techniques, and we must help system administrators, security personnel, and network administrators defend their computer systems against attack. Attackers come from all walks of life and have a variety of motivations and skill levels. Make sure you accurately assess the threat against your organization and deploy defenses that match both the threat and the value of the assets you must protect. We should never underestimate the power of a hacker who has enough time, patience, and know-how to accomplish anything he or she puts their mind to. So it is your duty to do the same.
[1] Skoudis, Ed. Counter Hack Reloaded: A Step-by-Step Guide to Computer Attacks and Effective Defenses. 2nd ed. Massachusetts: Pearson Education, Inc., 2005. Safari Books Online. 12 Dec. 2007.

Tuesday, September 21, 2010

Basic FAQs About Computer Hacking

Computer hacking and identity theft go hand in hand nowadays. The accessibility and flexibility of the internet have allowed computer hackers to access varied personal data online, which is then sold to identity thieves. This is an ongoing business that profits both the hacker and the identity thief at the expense of the victim.
Who are more prone to computer hacking?
The computer systems of small businesses are the most vulnerable to identity theft. These small businesses typically do not have large-scale security systems that can protect their databases and client information. Computer hackers can easily access customer credit card information and employee payroll files, as these data are typically unguarded. Often, these small businesses do not keep access logs recording the date, time, and person who accessed their sensitive information. Without such logs, they have no way of knowing whether their database or payroll information has been stolen.
How does computer hacking take place?
Hacking attacks can be performed in a couple of ways:
1. Hacking computers that have their firewalls disabled or not installed. Hackers also prey on wireless networks that do not have router firewalls enabled or installed.
2. Sending out email attachments that contain keystroke loggers or other malicious software that embeds itself unnoticed on the victim's computer. These programs record every keystroke the victim makes and send the data out when the victim goes online.
3. Attacking individual consumers who use old versions of browsers. These old browsers have vulnerabilities that are patched with every new edition; if you use an older browser, chances are hackers can enter your computer through it.
4. Exploiting unsecured wireless networks, or secured wireless networks with very weak password protection. Hackers can easily get inside such a network and view what everyone on it is viewing on their screens. When you enter your personal information, a hacker on the other end might be recording it for use in identity theft.
5. Previous employees or trusted users who access their former company's computers using their insider knowledge. These people are often disgruntled when they leave the company, so they seek to get even by hacking into the system.
What can I do about computer hacking?
There are a number of steps you can take to evade computer hackers and potential identity theft. Here are some tips:
1. Ensure that all the computers in your home or office have the latest firewalls and anti-virus programs installed. These programs should be kept updated as well, or else they will not serve their purpose.
2. Use the latest browser, or if you've grown fond of your current one, make sure you apply its security patches.
3. Using your anti-virus and anti-spyware software, scan your computer regularly for any potential malware.
4. Be wary about the websites you open. Do not click on just anything, and avoid downloading everything you see labeled as a "free download."

Blocking Intruders Using Network Intrusion Prevention

We have all heard of electric fences, which are used to promote safety and deter intruders from entering our territory. It is no different for our computer networks: they also require effective protection through an electronic fence called network intrusion prevention. A network intrusion prevention system (IPS) is generally referred to as an active security measure because it is capable of blocking malicious traffic by interfering in the data flow. In network security, the IPS represents the next-generation intrusion detection system: it inherits the thorough detection capabilities of an IDS and the blocking abilities of a firewall to perform intrusion prevention.
How a Network Intrusion Prevention Device Works
A network intrusion prevention system thoroughly analyzes every data packet that passes through the network. In this way, an IPS keeps a check on the traffic and recognizes patterns in the data. An IPS acts instantly whenever an unauthorized user carries out an attack on the network: it identifies the attack and denies the user access, leaving the intrusion attempt futile. An IPS also plays an important role in shaping the traffic flow through the network and ensures that crucial traffic is not interrupted. For instance, financial transactions can be prioritized over normal web surfing by using an IPS.
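A minimal sketch of the inspect-then-forward loop described above; the blocked patterns, port numbers, and packet records are invented stand-ins for real IPS rules and wire traffic.

```python
# Toy IPS loop: drop packets matching known-bad patterns, and queue
# priority traffic (e.g. financial transactions) ahead of the rest.
# Patterns and ports below are illustrative, not real rules.

BLOCKED_PATTERNS = ["<script>", "DROP TABLE"]
PRIORITY_PORTS = {443}   # e.g. financial transactions over HTTPS

def process(packet: dict, high: list, normal: list) -> str:
    payload = packet["payload"]
    if any(p in payload for p in BLOCKED_PATTERNS):
        return "dropped"          # active blocking, unlike a passive IDS
    queue = high if packet["port"] in PRIORITY_PORTS else normal
    queue.append(packet)
    return "forwarded"

high, normal = [], []
print(process({"port": 443, "payload": "fund transfer"}, high, normal))
# -> forwarded (queued ahead of ordinary web traffic)
print(process({"port": 80, "payload": "<script>alert(1)</script>"}, high, normal))
# -> dropped
```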
Network Intrusion Prevention and Zero-Day Threat Prevention
An IPS deploys a database of generic attack behaviors intended to block unknown attacks, in addition to a signature database containing known attack patterns. This functionality is referred to as zero-day threat prevention. A zero-day threat is a type of malicious code powerful enough to mislead even antivirus and anti-spyware software. You may deploy this functionality on your network, but it may block legitimate traffic by falsely identifying it as an attack; this is not a concern with an intrusion detection system (IDS), which only observes. The idea is to first configure your IPS device to work like an IDS, so that it collects traffic and enables the administrator to recognize any false-positive flows. These flows can then be excluded from the inspection engine once the system is switched over to act as an IPS.
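The IDS-first tuning workflow just described might be sketched like this. It is a toy model, not any vendor's actual API: the class, mode names, and flow identifiers are all invented.

```python
# Sketch of the detect-then-enforce workflow: run the inline sensor as an
# IDS (log only), let the admin mark false positives, then switch to IPS
# mode with those flows excluded. All names here are hypothetical.

class InlineSensor:
    def __init__(self):
        self.mode = "detect"      # start as an IDS: log, don't block
        self.excluded = set()     # flows the admin marked as false positives

    def handle(self, flow_id: str, looks_malicious: bool) -> str:
        if flow_id in self.excluded or not looks_malicious:
            return "allowed"
        return "logged" if self.mode == "detect" else "blocked"

s = InlineSensor()
print(s.handle("backup-job", True))   # -> logged (flagged, nothing dropped)
s.excluded.add("backup-job")          # admin marks it a false positive
s.mode = "prevent"                    # switch to IPS enforcement
print(s.handle("backup-job", True))   # -> allowed
```

The point of the two-phase rollout is visible in the last line: without the detect-only phase, the legitimate backup job would have been blocked outright.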

Network Security

Understanding Network Security Threats - What network security is and how to learn network security technology are the main things we are talking about today.
To understand, in part, why we are where we are today, you only have to remember that PC is the acronym for personal computer. The PC was born and, for many years, evolved as the tool of the individual. In fact, much of the early interest and growth came as a rebellion against the seemingly exclusionary attitudes and many restrictions of early data-processing departments. Admittedly, many PCs were tethered to company networks, but even then there was often considerable flexibility in software selection, settings preferences, and even sharing of resources such as folders and printers.
As a result, a huge industry of producers developed and sold devices, software, and services targeted at meeting user interests and needs, often with little or no thought about security. Prior to the Internet, a person could keep their computer resources safe simply by being careful about shared floppy disks.
Today, even the PCs of most individuals routinely connect to the largest network in the world (the Internet) to expand the user's reach and abilities. As the computing world grew, and skills and technology proliferated, people with less than honorable intentions discovered new and more powerful ways to apply their craft. Just as a gun makes a robber a greater threat, computers give the scam artist, terrorist, thief, or pervert the opportunity to reach out and hurt others in greater numbers and from longer distances.

Sunday, September 19, 2010

Certified Ethical Hacker Training Skills

There are many things that are possible at a click these days, thanks to the services of the internet. But along with this come a number of risks and crimes related to the internet, which have increased in number, and one of the most serious is hacking.
Hacking refers to entering the computer systems of companies in order to obtain information. This is illegal and dangerous to the company and to all the information kept in the company's records. To prevent such intrusions, companies now appoint professionals who are experts in preventing hacking; these professionals are called white hat hackers or ethical hackers. This is why hacker training is becoming so popular.
Significance of programming in developing certified ethical hacker training skills
The main function expected of a person undergoing ethical hacker training is to check an organization's information system: to test whether there are any flaws in the system and to check for viruses. The certified professional is also expected to find solutions and make the necessary changes so that penetration of the system by any unauthorized person is not possible. To be a successful ethical hacker, the person taking the training needs certain skills.
One of the most important is knowledge of programming. A person aiming at certified ethical hacker training needs to know programming languages like Java, C++, Perl, Python, and Lisp. If you are just starting afresh, it is a good option to learn Python first, because it is easy to learn and less complicated than the other languages. After you learn it, you can go on to learn Lisp, Perl, Java, and C.
Other required skills
Apart from the skills mentioned above, there are other skills that need to be picked up as part of honing your certified ethical hacker training. One of them is learning and understanding UNIX. This is very important because it is the very basis of the internet, and without learning this operating system, rewriting and modification are not possible. The best way to learn it is by practicing on the Linux or UNIX installation on your own computer. The next skill is learning HTML, which is very important in ethical hacker training.
If you want hacker training, knowledge of how to write HTML is a necessity. To understand all this and to make the ethical hacking training effective, it is necessary to be fluent in English too, because all the resources available in this area are in English; if you are not fluent in the language, no amount of training can make you an expert. You also need to develop a habit of reading and gathering information from the net, because the more informed you are, the better. There are many institutes that train people, but a lot of the work and research needs to be done on your own.
With the increase in the number of cyber crimes, there is an increased demand for professionals involved in computer security. Certified ethical hacker training is simply unavoidable.

Looking For Hacker Training?

Hacking, or penetrating an information system to gather details about an organization, is rampant these days, and this calls for help in protecting such systems. Entering or sneaking into a system to check for any faults is termed ethical hacking.
This is done by professionals employed by companies to check whether the system can be penetrated and to devise ways of preventing such activities. This is why hacker training is very popular these days, and there are many workshops held for certified ethical hacker training. Shopping on the net is the best option if you are looking for hacker training.
What to look for in ethical hacker training?
The hacker training imparted by professionals during these workshops trains people to think and act differently. Hacking is a term synonymous with computers, but computers are not the only systems that can be hacked: people can hack into telephones, mobiles, and other similar networked systems. To get ethical hacker training you can also research tools that are available on the net. Some of the really good tools are not available free of cost, but you should have some of them: snoopers, compilers, hex file editors, and APIs.
Along with these basic tools, in order to get ethical hacking training you should also learn techniques that help you in scripting, disk formatting and editing, and disassembly. Programming is a basic requirement in ethical hacking training: there is a lot of programming involved in the process of hacking, so you should be familiar with programming languages. Another requirement is familiarity with Windows, UNIX, Linux, and other operating systems.
Successful certified ethical hacker training
Apart from the requirements mentioned above, there are some general requirements for making the training successful. The first and most important is learning in a group: there needs to be a lot of discussion and exchange of ideas when you are training for such a certification. The next important thing is to get involved in some live projects.
Hands-on experience is always better than what you read and understand from books. It is always important to start a project from scratch and build on it so that you understand every minute detail of how it works. The next very important step is to make complete use of the internet. This is where you can get all the information you would ever need, and you also need to learn every required aspect of the net, such as making Boolean searches.
Every time you stumble upon a good site, bookmark it so that you can visit it later when you need it. These days there are many institutions that provide training in hacking, and if you are looking for hacker training you need to surf the net to find all the required information.

Keyboard Activity Monitoring Tool

A professional keyboard activity monitoring tool is a hidden inspection utility that keeps track of all activities performed on your system in your absence. Stealth keylogger software keeps an eye on each action performed by family members, children, students, employees, or workers when they use your computer. Invisible key-logging software establishes full control over your computer system: the software secretly stores keystrokes in an encrypted log file so that the administrator can review external users' activities in his absence, including visited websites, system login names and passwords, emails, and Windows clipboard entries.
Keyboard Activity Monitoring Tool
A keyboard surveillance tool runs completely in invisible mode, and no one except the system owner is aware of the software's installation. Such a keyboard logger utility evades major anti-keylogger and spy-detection tools: the application is not visible in the Start menu, system startup, or the add/remove programs list, and may not even be detected by antivirus software. Easy-to-use key-catcher software helps you watch what your co-workers are doing online, what students are doing in the lab, at school, or at home, and so on. You can customize the keyboard activity tracking report in exactly the manner you want.

ARP, MAC, Poisoning, & WiFi

In this paper we will cover the basics of the Address Resolution Protocol (ARP), Media Access Control (MAC) addresses, wireless networking (WiFi), and layer 2 communications. I hope to explain how a "Man in the Middle" attack works; the common names for this are ARP poisoning, MAC poisoning, or spoofing. Before we can get into how the poisoning works, we need to learn how the OSI model works and what happens at layer 2 of the OSI model. To keep this basic, we will only scratch the surface of the OSI model, just enough to get an idea of how protocols work and communicate with each other.
The OSI (Open Systems Interconnection) model was developed by the International Standards Organization (ISO) in 1984 in an attempt to provide a standard for the way networking should work. It is a theoretical layered model in which the notion of networking is divided into several layers, each of which defines specific functions and/or features. However, this model provides only general guidelines for developing usable network interfaces and protocols. Sometimes it can be difficult to distinguish between layers, as some vendors do not adhere to the model completely. Despite all this, the OSI model has earned the honor of being "the model" upon which all good network protocols are based.
The OSI Model
The OSI Model is based upon 7 layers (Application, Presentation, Session, Transport, Network, Data Link, and Physical). For our purposes we will review layer 2, the data link layer, which defines the format of data on the network. A network data frame, aka packet, includes a checksum, source and destination addresses, and data. The data link layer handles the physical and logical connections to the packet's destination, using a network interface. A host connected to an Ethernet network would have an Ethernet interface (NIC) to handle connections to the outside world, and a loopback interface to send packets to itself.
Ethernet addressing uses a unique, 48-bit address called the Ethernet address or Media Access Control (MAC) address. MAC addresses are usually represented as six colon-separated pairs of hex digits, e.g., 8A:0B:20:11:AC:85. This number is unique and is associated with a particular Ethernet device. The data link layer's protocol-specific header specifies the MAC addresses of the packet's source and destination. When a packet is sent to all hosts (broadcast), a special MAC address (ff:ff:ff:ff:ff:ff) is used. With this concept covered, we need to explain what ARP is and how it corresponds to the MAC address.
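As a small aside, the MAC format just described is easy to check in code. This Python sketch simply encodes the "six colon-separated pairs of hex digits" rule and the special broadcast address; it is illustrative, not a complete address validator (some tools also accept dash-separated forms).

```python
import re

# Validate the 48-bit MAC format: six colon-separated pairs of hex digits.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
BROADCAST = "ff:ff:ff:ff:ff:ff"   # the send-to-all-hosts address

def is_valid_mac(addr: str) -> bool:
    return bool(MAC_RE.match(addr))

def is_broadcast(addr: str) -> bool:
    return addr.lower() == BROADCAST

print(is_valid_mac("8A:0B:20:11:AC:85"))   # -> True
print(is_broadcast("FF:FF:FF:FF:FF:FF"))   # -> True
```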
The Address Resolution Protocol is used to dynamically discover the mapping between a layer 3 (protocol) address and a layer 2 (hardware) address. ARP dynamically builds and maintains a mapping database between link-local layer 2 addresses and layer 3 addresses; in the common case this table maps Ethernet addresses to IP addresses. This database is called the ARP table, and it is the authoritative source when it comes to routing traffic on a switch (a layer 2 device).
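Conceptually, an ARP table is just a dictionary from IP addresses to MAC addresses. This toy Python version (all addresses are made up) mirrors the lookup a host performs before framing a packet:

```python
# A toy ARP table: the layer-3 (IP) to layer-2 (MAC) mapping described
# above. Addresses are invented for illustration.

arp_table = {
    "192.168.1.1":  "00:1a:2b:3c:4d:5e",   # gateway
    "192.168.1.10": "8a:0b:20:11:ac:85",
}

def resolve(ip: str) -> str:
    """Return the cached MAC for an IP, or fail on a cache miss."""
    mac = arp_table.get(ip)
    if mac is None:
        # A real stack would broadcast a who-has request to
        # ff:ff:ff:ff:ff:ff here and cache the reply.
        raise KeyError(f"no ARP entry for {ip}; would broadcast a request")
    return mac

print(resolve("192.168.1.1"))   # -> 00:1a:2b:3c:4d:5e
```

The fact that entries in this cache are simply overwritten by incoming replies is exactly what the poisoning attack in the next section exploits.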
ARP Table
Now that we have explored MAC addresses and ARP tables, we need to talk about poisoning. ARP poisoning is also referred to as ARP poison routing (APR), ARP cache poisoning, or spoofing. It is a method of attacking an Ethernet LAN by updating the target computer's ARP cache/table with forged ARP request and reply packets, in an effort to change the layer 2 Ethernet MAC address (i.e., the address of the network card) to one that the attacker can monitor.
The Attack
Because the ARP replies have been forged, the target computer sends frames that were meant for the original destination to the attacker's computer first, so the frames can be read. A successful ARP poisoning attempt is invisible to the user: since the end user never sees the poisoning, they surf online as normal while the attacker collects data from the session. The data collected can include passwords for e-mail, banking accounts, or websites. This kind of attack is also known as a "Man in the Middle" attack, and it basically works like this: the attacker's PC sends a poisoned ARP request to the gateway device (router), so the gateway now thinks the route to any PC on the subnet goes through the attacker's PC. Meanwhile, all hosts on the subnet think the attacker's IP/MAC is the gateway, so they send all their traffic through that computer, and the attacking PC forwards the data on to the real gateway. What you end up with is one PC (the attacker's) seeing all traffic on the network. If the attack is aimed at one user, the attacker can simply spoof the victim's MAC and affect only that MAC on the subnet. Keep in mind that the gateway (router) is designed to hold large routing tables and many sessions connected to it at once. Most PCs cannot handle that many routes and sessions, so the attacker's PC has to be fast (depending on the volume of traffic on the subnet) to keep up with the flow of data. In some cases a network can crash or freeze if the attacker's PC is unable to route the data effectively; the network crashes because of the number of packets dropped when the attacker's PC cannot keep up with the flow of data.
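For the curious, the forged ARP reply at the heart of this attack has a fixed 28-byte layout (defined in RFC 826). The Python sketch below only builds and inspects the bytes for educational purposes; it sends nothing on the wire, and all addresses are invented.

```python
import struct

# Build the 28-byte body of an ARP *reply* (RFC 826 layout) that falsely
# claims the gateway's IP lives at the attacker's MAC. Educational only:
# nothing is transmitted, and every address below is made up.

def mac_bytes(mac: str) -> bytes:
    return bytes(int(b, 16) for b in mac.split(":"))

def ip_bytes(ip: str) -> bytes:
    return bytes(int(o) for o in ip.split("."))

def build_arp_reply(attacker_mac, victim_mac, gateway_ip, victim_ip):
    # htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4, op=2 (reply)
    header = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    # The poison: sender = (attacker's MAC, gateway's IP),
    # target = (victim's MAC, victim's IP).
    return (header
            + mac_bytes(attacker_mac) + ip_bytes(gateway_ip)
            + mac_bytes(victim_mac) + ip_bytes(victim_ip))

pkt = build_arp_reply("de:ad:be:ef:00:01", "8a:0b:20:11:ac:85",
                      "192.168.1.1", "192.168.1.10")
print(len(pkt))   # -> 28, the fixed size of an Ethernet/IPv4 ARP body
```

When the victim's stack caches this reply, every frame it addresses to the gateway's IP is delivered to the attacker's MAC instead, which is the whole attack in one packet.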
Wardriving Anyone?
Now, a lot of people think they're safe because their home network is inside their house. This is not true: you should always have a firewall on any internet connection. An attacker can just as easily spoof the ISP's devices (cable modem or DSL router) to get all your outbound data. If you are using wireless, remember to set up encryption, or you have just invited attackers into your home with no firewall to block them. I have driven through many cities with my wireless card on and seen over 60% of all APs open with no security. There is a sport called wardriving, which involves driving around in your car with a wireless network card to find wireless networks. Most wardrivers do not get onto the networks they find, but they do document them (normally with GPS). The idea behind wardriving is just to see how many APs you can find, and this sport has caught on big in the US. It would be very easy to get an IP address on a wireless network and then ARP poison the subnet; this can be done in less than 2 minutes on an open wireless access point. Once the attacker is on your subnet they can start receiving all your data, so if you buy anything online the attacker now has your credit card information. There are ways to prevent this kind of attack, but most switches are vulnerable to it. To prevent ARP poisoning you need a switch that supports security features; most vendors' equipment can handle this, but these kinds of switches normally cost more money. Keep in mind that there are many free tools on the internet that perform ARP poisoning/spoofing. The tools are not hard to use, and with more and more home users going wireless, the risk of an attacker getting your data keeps rising. The best thing to do for protection is to understand the basics of your network, and if you want wireless, make sure you have encryption enabled (WPA/WPA2 where available; WEP is better than nothing but is easily cracked).
The Good Guys
So far we have covered how attackers use ARP poisoning to intercept users' data, but there are also good reasons to ARP poison a network. Network engineers often need to sniff the protocols on a network to make sure data is flowing correctly. The problem with sniffing on a switched network is that you can only see data bound for your own interface, plus broadcast traffic; on unmanaged switches there is no way to see all host traffic in order to inspect it. With ARP poisoning you can divert all traffic to pass through the sniffer's interface, see all data on the network, and analyze the traffic for possible issues. Admins and engineers may be troubleshooting speed issues on a subnet and need to see all the traffic; once you spoof the subnet to sniff the traffic, you will be able to see whether viruses or a bad NIC are causing a broadcast storm. With any tool there are always good and bad uses, and the thing to remember is to be careful what you do online, because anyone could be monitoring you. If you have any questions about poisoning, feel free to send me an e-mail at enjamoripavan@gmail.com.

Why You Need to Know About the 192.168.1.1 IP Address

192.168.1.1 is a private IP address that is widely used by routers as their factory-default LAN address. Because so many manufacturers ship it as the default, there is a real chance of conflicts: multiple network devices from the same manufacturer can end up on the same network claiming the same address. 192.168.1.1 falls within the ranges of addresses reserved for private networks.
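Python's standard `ipaddress` module can confirm which addresses fall inside the reserved private ranges; a quick sketch:

```python
import ipaddress

# RFC 1918 reserves three blocks (10/8, 172.16/12, 192.168/16) for
# private networks; addresses in them are never routed on the
# public Internet.
for addr in ["192.168.1.1", "10.0.0.1", "172.16.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)
# 192.168.1.1 True
# 10.0.0.1 True
# 172.16.0.1 True
# 8.8.8.8 False
```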

How to avoid IP address conflicts? The problem of IP address conflicts is best avoided by a properly configured router. Routers include a DHCP server that handles the IP assignment task automatically, without human intervention. Each computer receives a unique address, which eliminates the possibility of any other computer on that private network holding the same address and thereby avoids clashes.

Which routers can use 192.168.1.1? There is absolutely no restriction on the type of router that can use this private IPv4 address. Any modem, computer, or other network device can be configured to use 192.168.1.1. It is not always advisable, however, because address clashes can later show up as connectivity problems.

To access the device at 192.168.1.1, type the address into your browser's address bar and press Go; the router's configuration pages will load with options for changing its settings. Any browser that supports web standards will work. From there you can set up a username and password and configure the device.

It is possible for your computer to encounter another device with the same configuration. In that case you are strongly advised to make a complete backup of your settings before correcting anything. Proceed with care: if your settings turn out to be wrong while you are reconfiguring your network, you can lose your Internet connection.

Other default private IP addresses. Besides 192.168.1.1, addresses such as 192.168.2.1 and 192.168.0.1 are also meant for private use. 192.168.0.1 is very common among consumers who set up routers for small networks. Like 192.168.1.1, it is non-routable: these addresses cannot be reached from outside the private network.

192.168.0.1 is the factory default on D-Link routers for access to the built-in configuration pages. From there you can reach the modem's settings and adjust WEP, MAC address filtering, LAN network settings, and so on.

Because the default address is private, your router settings are protected against external attempts to remap them. If it were not a private IP address, the router settings alone might not have prevented that kind of unwanted access from outside.

Please read more about the 192.168.0.1 and 192.168.1.1 IP addresses.

Saturday, September 18, 2010

Internal Threats to Your Network

Internal Threat Landscape

In today's world, more and more customer data is being found on servers, desktops and laptops which contain critical information that can promote a company's growth or destroy the company in an instant. Furthermore, the risk extends beyond the private sector to the public sector and anyone in their homes receiving services from one of these infrastructure entities.

A study performed by Promisec, Inc., a company that regularly conducts comprehensive security audits across a number of industries - including finance, healthcare, insurance, manufacturing, etc., found that:

• Use of unauthorized removable storage continues to rise in organizations.
• The number of endpoints that lack threat management agents, or are not updated with the latest build or signatures, continues to rise.
• Instances of unauthorized instant messaging continue to increase in all organizations.

The study also discovered that -

• 12% of infected computers had a missing or disabled anti-virus program.
• 10.7% had unauthorized personal storage like USB sticks or external hard drives.
• 9.1% had unauthorized peer-to-peer (P2P) applications installed.
• 8.5% had a missing 3rd-party desktop agent.
• 2.6% had unprotected shared folders.
• 2.2% had unauthorized remote control software.
• 2% had missing Microsoft service packs.

Without application awareness, both perimeter and defensive island systems were easily defeated. For example, SQL Slammer was able to enter organizations quickly because:

• Firewalls and anti-virus solutions that rely on signatures didn't view the traffic as a threat.
• SQL Slammer often bypassed perimeter defenses entirely, entering at the network edge through laptops and mobile devices whose traffic never traversed the firewall.
• Like firewalls, anti-virus software and most HIDS had no signature to identify it and did not recognize it as a threat.
• SQL Slammer was memory-resident. Most anti-virus software missed it completely because their scanning engines focus on detecting exploits written to disk.

Within minutes of an initial SQL Slammer infection, nearly all vulnerable computers on the inside of the network were compromised. Depending on the number of infected devices, this often resulted in massive denial of service on the internal LAN. Furthermore, newer types of attacks are designed not to make "noise" in order to stay undetected.

Product Substitute Availability

Firewalls are a necessary security control for policy enforcement at any network trust boundary, but changing business and threat conditions are putting pressure on growth in the firewall market. Enterprises are redesigning their demilitarized zones (DMZs) to react to the business realities of how staff and customers connect, which drives firewall demand up. However, the increasing requirement for network defense against more-complex threats has increased the deployment of network intrusion prevention, and driven vendors to provide products that support complex deployments and rule sets that mix traditional port/protocol firewall defense with deep-packet inspection intrusion prevention.

At one point in time, Cisco had the best firewall on the market. As the years passed, competitors of all sizes vied for Cisco's market share; vendors such as Juniper, Checkpoint, McAfee and others have challenged and even taken market share from Cisco. In Gartner's 2008 Magic Quadrant, only two vendors reside in the upper right-hand "leaders" quadrant - Juniper and Checkpoint.

In the latest Gartner report, dated 12 October 2009, Gartner predicts that large enterprises will replace stateful firewalls with next-generation firewalls (NGFWs) during the natural lifecycle replacement, and that very few vendors have upgraded their product lines to reflect the new attack vectors. Gartner believes that changing threat conditions and changing business and IT processes will drive network security managers to look for NGFW capabilities at their next firewall/IPS refresh cycle. The key to successful market penetration by NGFW vendors will be to demonstrate firewall and IPS features that match current first-generation capabilities while including NGFW capabilities at the same or only slightly higher price points.

Default Router IP Addresses

The widespread penetration of the Internet means routers are now present almost everywhere. Though they differ in their fine points, overall they share the same features. Reading the router's manual is the only sure way to learn the exact features of your router, but some points are common to all routers.

You set up the router's IP addresses as instructed in the router manual. Generally, you'll need to use an address from one of the reserved private ranges. The most common default IP address for home routers is 192.168.1.1.

Typically, a home network router has two IP addresses:
• one for the internal home network (the LAN)
• one for the external Internet connection (the WAN).

Now, how can you find the router's IP addresses? Most commonly, the internal (LAN) IP is set to a default value. The internal IP addresses used by the most popular router companies are:
• Linksys routers use 192.168.1.1
• Netgear and D-Link routers typically use 192.168.0.1
• Some US Robotics routers use 192.168.123.254
• Some SMC routers use 192.168.2.1.
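The vendor defaults above amount to a simple lookup table, sketched here in Python. The entries come from the list above; actual defaults vary by model, so treat them as examples, not a complete reference.

```python
# Common factory-default LAN addresses, keyed by vendor.
DEFAULT_LAN_IPS = {
    "Linksys": "192.168.1.1",
    "Netgear": "192.168.0.1",
    "D-Link": "192.168.0.1",
    "US Robotics": "192.168.123.254",
    "SMC": "192.168.2.1",
}

def default_ip(vendor: str) -> str:
    """Return the typical default LAN IP, or a fallback hint."""
    return DEFAULT_LAN_IPS.get(vendor, "check the router's manual")

print(default_ip("Linksys"))  # 192.168.1.1
print(default_ip("Acme"))     # check the router's manual
```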

Whatever the brand of the router, its documentation should clearly state its default internal IP address. Administrators can change this value during router setup, but the private LAN address remains fixed once it has been set. It can easily be viewed from the router's administrative console.

The WAN address, or external IP, of the router is assigned when it connects to the Internet service provider. This address can also be viewed from the administrative console, or by visiting a web-based IP address lookup service from a computer on the home LAN.

For some home broadband routers and modems, 192.168.1.254 is the default IP address. Products that use 192.168.1.254 as the default include:
• Billion ADSL routers
• some 3Com OfficeConnect routers
• Linksys SRW2024 managed switches
• Netopia / Cayman Internet gateways
• Westell modems for Bellsouth DSL Internet service in the U.S.

This address is set by the manufacturer at the factory, but you can change it at any time using the vendor's management software.

This is a private IPv4 network address, and any device on the local network can be set up to use it. As with any IP address, only a single device on the network should use it at any given time, to avoid address conflicts.

192.168.1.100 marks the beginning of the default dynamic IP range on Linksys home network routers, so the first device you connect to a Linksys router will usually be assigned 192.168.1.100 by DHCP. The router's DHCP range can be changed through its configuration utility to include or exclude this particular address. Because 192.168.1.100 is a private IP address inside the router's default DHCP range, it should not be assigned statically to a local device, or conflicts will result.
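A Linksys-style DHCP pool can be modeled with Python's `ipaddress` module. This is only a sketch: the pool start mirrors the 192.168.1.100 default described above, but the pool size of 50 is an assumption for illustration, not the actual Linksys default.

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
pool_start = ipaddress.ip_address("192.168.1.100")
POOL_SIZE = 50  # assumed; real routers let you configure this

# Collect the addresses DHCP would hand out, in order.
pool = [ip for ip in network.hosts()
        if pool_start <= ip < pool_start + POOL_SIZE]

print(pool[0], pool[-1])  # 192.168.1.100 192.168.1.149
```

A static address for a printer or server should then be picked from outside this range, e.g. 192.168.1.2 through 192.168.1.99.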

To find out more information about default router IP addresses, please read more about 192.168.1.254 and 192.168.1.100 IP addresses.

Why Use a NAT Router?

Every computer connected to the Internet is exposed to dangers. For myself and many others the benefits of using the Internet far exceed the possible dangers. We can minimize the dangers if we follow some basic security principles both for our home and office computers.

Let's start with how we connect to the Internet using a wired connection. Many home users and small businesses connect to the Internet through a Cable/DSL modem. This type of connection is an always on connection. As long as our computers are powered on we are connected to the Internet and exposed to dangers. We increase the danger if we connect our PC directly to the Cable/DSL modem. Computers connected in this way will receive a DHCP public IP address from their Internet Service Provider. What this means is that our PC is both visible and accessible directly from the Internet. This exposes us to Internet scanning, worms, and hackers. If we don't have a software firewall installed then our PC can be easily compromised and our data stolen.

Even though a software firewall can lessen the dangers we are exposed to when we connect in this way, I don't recommend this method. A better solution would be to use a Cable/DSL NAT router. The NAT router would connect directly to the Cable/DSL modem and then our computer or computers would connect to the NAT router. Why is this safer?

One of the key benefits of NAT (Network Address Translation) routers is that the router hides the internal IP address of your computer or computers. The Internet sees you as a single machine with a single IP address. This effectively masks the fact that one or many computers on the LAN side of the router may be sharing that one IP address. This not only provides security benefits but also financial ones. NAT enables you to have more than one computer on your home or office network while you only have to pay for one public IP address from your ISP.

How does NAT work? When you turn on your computer, it receives an RFC 1918 private IP address from your router; with most Cable/DSL routers this will be on a 192.168.x.x subnet. This internal private IP must be translated (NATed) to a public address for you to be able to access the Internet. Since all computers on the LAN side of the router share the same single public IP address, the router keeps track of outbound connections through PAT (Port Address Translation). Here is what happens. When you make an outbound connection to Google, the NAT router receives the request and changes your private IP (192.168.1.20, for example) to a public IP address, say 12.46.115.225, with a port number of 2500, making it 12.46.115.225:2500. A second computer on the same LAN, with an IP of 192.168.1.21, makes an outbound request at the same time. This computer is assigned the same public IP but a different port number, say 2501, making it 12.46.115.225:2501. The NAT router keeps track of these connections in a table and uses it to match return traffic to the correct computer on the private LAN side of the router.
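The PAT table just described can be sketched in a few lines of Python. This is a toy model, not a real NAT implementation; the public IP and port numbers simply mirror the examples in the text.

```python
import itertools

class PatTable:
    """Toy model of a NAT router's Port Address Translation table."""

    def __init__(self, public_ip="12.46.115.225", first_port=2500):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)  # next free public port
        self.table = {}  # (public_ip, port) -> (private_ip, private_port)

    def translate_outbound(self, private_ip, private_port):
        """Record an outbound connection; return its public mapping."""
        mapping = (self.public_ip, next(self._ports))
        self.table[mapping] = (private_ip, private_port)
        return mapping

    def match_inbound(self, public_ip, port):
        """Return the private host for return traffic, or None (discard)."""
        return self.table.get((public_ip, port))

nat = PatTable()
print(nat.translate_outbound("192.168.1.20", 49152))  # ('12.46.115.225', 2500)
print(nat.translate_outbound("192.168.1.21", 49153))  # ('12.46.115.225', 2501)
print(nat.match_inbound("12.46.115.225", 2500))       # ('192.168.1.20', 49152)
print(nat.match_inbound("12.46.115.225", 9999))       # None
```

The last line is the security property discussed below: inbound traffic with no matching table entry has nowhere to go and is simply dropped.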

This is the really good part, and why the router provides added security. All traffic arriving at the NAT router that does not exactly match an entry in the router's table is discarded as unwanted. This stops essentially all unsolicited inbound traffic originating from Internet scanning, worms, and hackers, protecting the computers on the private LAN side of the router. So if you don't already have a NAT router, why not get one? The added security benefits are certainly worth the added expense.

Of course, for a NAT router to provide its full benefits it has to be configured correctly. I will discuss this, as well as the following subjects, in future articles: how to secure wireless networks, how to safely make a server available to Internet users through port forwarding, what a DMZ is and what its benefits are, and how adding a second NAT router can provide even greater security. Please feel free to contact me with any questions or comments.

Router

If you've been brought up in the 21st century then you probably take a lot of things for granted that 30 years ago people just didn't have. One of those things is the Internet and its ability to be able to connect people from all over the world and allow them to interact with each other in a variety of ways including sending email, visiting web sites, joining forums, attending online chats and countless other things. But none of this would be possible if it weren't for a device that most people have never seen and probably don't even know exist, called a router.

Routers are pieces of equipment that send messages from everyone connected to the network along thousands of different pathways. We're going to take a behind the scenes look at exactly how these routers work.

Let's say you're sending an email to a friend of yours who is living across country or even in another part of the world. How does the email know to end up on your friend's computer instead of all the other millions of computers all over the world? A good part of the work to get these messages from one computer to another is handled by routers. Rather than pass messages within networks, routers pass messages from one network to another.

To get an idea of how this works, let's take a very simple example.

Let's say you have two departments, Department A and Department B, each with 5 employees, and Employee 1 in Department A wants to send an email to Employee 3 in Department B. Each department has its own network of computers. A router links the two networks together so that they can communicate; it is the only piece of equipment that sees both networks. Many people ask, why not just make one network? The simple answer is that if the two departments do completely different jobs for the company and each sends massive amounts of information internally, you don't want one department's traffic slowing down the other's. To ease the "traffic burden," the two departments are separated into two networks, with a router between them so they can still communicate when they need to.

The way the router knows what to send where is a configuration table. This table consists of information on which connections lead to which addresses, priorities for each connection, and rules for how to handle traffic passing between networks. The router then has two basic jobs: first, to make sure information doesn't go where it's not needed, so the volume of data doesn't clog up the network; and second, to make sure information goes where it is supposed to go.

To simplify how this happens, the router looks at the destination address of each packet sent from the source location. It checks its table to see where this address is and sends each packet to that address, bypassing all the other addresses in the network so as not to slow the network down.
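That destination-address lookup is essentially a longest-prefix match. Here is a minimal Python sketch; the route entries and interface names are invented for illustration, loosely following the two-department example above.

```python
import ipaddress

# A toy routing table: (destination network, outgoing interface).
ROUTES = [
    (ipaddress.ip_network("10.1.0.0/16"), "eth0"),  # Department A
    (ipaddress.ip_network("10.2.0.0/16"), "eth1"),  # Department B
    (ipaddress.ip_network("0.0.0.0/0"), "wan0"),    # default route
]

def next_hop(dest: str) -> str:
    """Pick the most specific (longest-prefix) matching route."""
    ip = ipaddress.ip_address(dest)
    matches = [(net, iface) for net, iface in ROUTES if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.2.3.4"))       # eth1 - Department B's network
print(next_hop("93.184.216.34"))  # wan0 - no specific route, use default
```

Real routers do the same lookup in hardware, against tables built by routing protocols rather than hand-written lists.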

Friday, September 17, 2010

Computer Hacking

Computer hacking is defined as any act of accessing a computer or computer network without the owner's permission. In some cases, hacking requires breaching firewalls or password protections to gain access. In other cases, an individual may hack into a computer that has few or no defenses. Even if there are no defenses to "break" through, simply gaining access to a computer and its information qualifies as criminal computer hacking.

The Intent to Hack

To be convicted of computer hacking, it must be proven that the defendant knowingly gained access to a computer with the intent of breaching it without permission. Sometimes individuals, particularly young computer-savvy teenagers, break into a computer or network just to prove that they can. They may brag about the accomplishment afterward, using the stunt to flaunt their computer abilities. Even if there was no intent to steal from or defraud the hacked system, the defendant can still be criminally charged.

Criminal Charges

When an individual is arrested in Florida for hacking, he or she will be charged with a felony. If the defendant accessed a computer system without authorization but did not intend to steal or defraud, he or she will be charged with a third degree felony. If, however, the hacker broke into the system and planned to defraud the owner of money or information, he or she will be charged with a second degree felony. Past computer hacking offenses have included attempts to steal credit card information, social security numbers, or sensitive company or government information.

Penalties for Hacking

Computer hacking is considered a major threat to company integrity, government confidentiality, and personal security. It is therefore prosecuted aggressively in a court of law. Under Florida law, a third degree felony for hacking can result in a maximum 5 year prison sentence and up to $5,000 in fines. For a hacking offense that involves theft or fraudulent activity, the defendant could be penalized with up to 15 years in prison and a $10,000 fine.

Beyond the immediate court-ordered penalties, a hacking offense can destroy an individual's personal and professional reputation. He or she may have trouble applying to colleges, obtaining scholarships, finding a job, or obtaining a loan. Even many years after a conviction, a felony computer hacking charge can still have negative effects.

How to Detect If You've Been Hacked

Everybody knows about hacking and its threat to Internet users, but the question is, would you even know that you had been hacked? Usually not. When an individual hijacks a computer, the trick is to do so without letting the owner know; otherwise emergency safety software and other measures would be deployed, making further hacking impossible at that particular time. The trick, therefore, is to hack secretly so the intrusion can continue for as long as it is useful to the hacker as he advances his selfish ends.

What makes things worse is the fact that many homes and businesses these days have adopted wireless technology for convenience. Most of them do not know that this also makes it very convenient for cyber criminals to hatch their plots. With these wireless networks, hackers can be just a few feet away, enter the network, and bully every computer on it. In fact, this type of hacking is now so rampant that one can find public sites selling lists of known open wireless networks for hackers to target next.

Once a computer has been hacked, it becomes known as a "zombie." A zombie computer serves the hacker in a number of ways, from sending spam emails to infecting other computers with viruses. Some owners are not even aware that their computers have been used to spread pornographic material or attack government systems. By this point, the computer is fully under the hacker's control. Hackers even protect themselves from each other by patching the security holes in zombie computers so that no other hacker can break in.

Why It Is Important To Use Data Eraser Program

For many people, doing a clean hard disk erase seems like a waste of time. They assume a thorough hard disk erase would take too much effort and be so complicated that it becomes a frustrating endeavor, and that it would not provide enough benefit anyway, so they might as well not do it.

Actually, continuing to work without a proper data eraser program can lead to various problems. Simply pressing the delete key does not actually remove data from the disk, and even overwriting files can leave recoverable traces that cause a variety of issues. Worse, never doing a clean hard drive erase can lead to a security breach that proves difficult to clean up.

Here are some of the reasons as to why it would be important to do a clean hard drive erase:

To keep snoopers from seeing your files:

By doing a clean hard disk erase, you make it extra difficult for snoopers or spies to examine the files on your computer. A thorough erase defeats even advanced data recovery software.

This is especially relevant for people who work with highly valuable and confidential information for their businesses or companies. Of course, home users benefit as well: you can eliminate your browsing history, credit card numbers, online banking information, and other data that you would not want others to recover with good data recovery software.
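A bare-bones sketch of the idea in Python: overwrite a file's contents with random bytes before deleting it. This is illustrative only; journaling file systems and SSD wear-leveling can retain old copies of the data, so a dedicated erasure tool remains the safer choice, and the single-pass default here is an assumption, not a recommendation.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes, then unlink it.

    Sketch only: does not defeat journaling file systems, SSD
    wear-leveling, or backups holding earlier copies of the file.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random data, not zeros
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)
```

Contrast this with an ordinary delete, which only removes the directory entry and leaves the file's bytes on disk for recovery software to find.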

Registry Cleaner Trial

Registry cleaner trial periods, usually free trial periods, are offered to people contemplating purchasing a program to clean their registries. Of course, free is always good even if the trial period is only long enough for you to see if you really need a registry cleaner or not.

Basically, there are two different types of free trials. In this article, we will explain these two different types of trials and see how one can be very helpful to someone who has not yet decided to buy a registry cleaner, and how the other type may not end up being such a good deal after all.

One type of free cleaner trial is touted as a free registry clean-and-repair program. With this type of offer, what you get is a registry repair program that will scan your Windows operating system for registry corruption and even clean some, or possibly all, of it for free. The problem with these so-called free programs is that they are usually inferior products.

Another problem is that after a while the free period runs out, and you will have to pay for a cleaning even if the period expires while the cleaner is in the middle of cleaning your registry. This can happen because the duration of the free trial is actually a set number of instances of corruption that the cleaner will fix for free. Often, the amount of corruption cleaned for free will not even amount to one good cleaning.

While you are deciding whether or not to buy this registry cleaner, it will constantly throw pop-ups at you, reminding you that your trial period has ended and you must now pay. The amount you are asked to pay will usually be more than what you would pay for a top-notch cleaner.

On top of that, stopping these annoying pop-ups can be very difficult. It can be done, but it usually takes a special download. In other words, when you are offered a free registry cleaner that will actually clean your registry, be aware: this program really is not free.

Most commercial registry cleaning programs will allow you to scan your registry for free. This free registry scan will be all this program will give you for free. However, it is enough for you to see if your registry is truly corrupt and if the problems you've been experiencing with your computer are due to something other than registry corruption.

In other words, if your registry is not corrupt you don't need to buy the cleaner. If it is, you will have to get your registry cleaned one way or another. So, at this point, it would be best to purchase this registry cleaner or at least, some registry cleaner.

Admittedly, there is no way to tell how good a registry cleaner is just because it tells you your registry is corrupt. However, the most reputable registry cleaners will not lie to you. I have tested many of them by cleaning my registry completely of corruption and then running a free scan with another brand; once in a while, I run into one that claims I still have registry corruption.

Those, of course, are not reputable cleaners. By contrast, other registry cleaners, including some big-name products, have scanned my recently cleaned registry and honestly reported that there is little or no corruption present.

The moral of the story is: first research the product you think you would like to purchase. If the free scan then indicates you do have registry corruption, you will already know this is a product you can trust and therefore be willing to buy. This is wise because there is a lot of room for chicanery in registry cleaner free-trial offers.

The author, Hiel Strassman, is a computer engineer who has worked with Windows operating systems since Windows 3.11. He recommends you take a look at his site, Registry Fix Review to see how all the top registry cleaners rate against one another. Also, learn how to get a top registry cleaner and several other computer repair tools for the price of just a registry cleaner at Computer Repair Complete.