Monday, October 21, 2013

Windows Server 2012 R2 New Features

New Features and Highlights
Using the new and enhanced features in Windows Server 2012 R2, you can improve performance and more efficiently use datacenter capacity, helping you increase business agility.

Windows Server delivers resilient, multi-tenant-aware storage and networking capabilities for a wide range of workloads using industry-standard hardware. By automating a broad set of management tasks, Windows Server 2012 simplifies the deployment of major workloads and increases operational efficiencies.

Storage
Organizations face increasingly large amounts of data that must be managed cost effectively. Windows Server helps you maximize your investments by getting better performance from your existing storage area network (SAN) infrastructure. It also delivers the ability to build enterprise-class storage infrastructure with commodity hardware.

Storage Spaces. Windows Server helps reduce costs and improve performance by consolidating standard disks into pools that can be treated as standard drives within the operating system. The logical disks, or Storage Spaces, can be configured for varying resiliency schemes and assigned to different departments. As a result, organizations can simplify isolation and administration of the storage infrastructure and improve performance, flexibility, scalability, and availability. With Windows Server 2012 R2, data is automatically tiered across solid-state drives and hard-disk drives based on usage patterns, to deliver the best performance for data that gets used the most.
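As a rough illustration (not from the original article), here is how a tiered storage space might be created with PowerShell on Windows Server 2012 R2; the pool, tier, and disk names and the tier sizes below are placeholders, and the sketch assumes a single storage subsystem on the server:

# Pool all available physical disks (assumes one storage subsystem)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Define an SSD tier and an HDD tier inside the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Carve out a mirrored, tiered virtual disk; frequently used data is moved to the SSD tier automatically
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace01" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror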

Application support with Server Message Block (SMB) 3.0. By separating storage and compute elements of virtual machines, organizations can move virtual machines without impacting storage configurations. Windows Server enables this with SMB file shares for continuous availability using standalone file servers and clustered file servers. Storage can be managed with Storage Spaces and exposed as file shares for Hyper-V virtual machines and SQL databases. With SMB transparent failover, even if one of the nodes goes down, SMB transparently fails over to another node without downtime. Since SMB uses your existing network infrastructure, it also eliminates the need for a dedicated network.
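A minimal sketch of the file-share side (not from the original text): publishing a folder on a clustered file server as a continuously available SMB share that Hyper-V hosts can use. The path, share name, server name, and computer accounts are assumed placeholders:

# Publish a clustered volume as a continuously available SMB share
New-SmbShare -Name "VMStore01" -Path "C:\ClusterStorage\Volume1\VMStore01" `
    -FullAccess 'CONTOSO\HyperVHost1$', 'CONTOSO\HyperVHost2$' `
    -ContinuouslyAvailable $true

# Hyper-V and SQL Server can then point at \\SOFS01\VMStore01 for VM and database files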

Data deduplication. A new storage efficiency feature of Windows Server 2012 R2 helps reduce file storage requirements through variable-size chunking and compression. Windows Server will automatically scan disks, identify duplicate chunks of data and store those chunks once.
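For illustration (a sketch, assuming the deduplication role service is available on the server hosting the volume), enabling deduplication on a data volume might look like this in PowerShell; the drive letter and file-age threshold are placeholders:

# Install the deduplication role service, then enable dedup on volume E:
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Default
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3

# Start an optimization job and check the savings
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"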

Networking
Networking enhancements in Windows Server 2012 R2 make it easier to virtualize workloads, improve security, provide continuous availability for applications, and get better performance out of existing resources. Networking enhancements also bolster network isolation, which is key to running multi-tenant environments. These enhancements can improve virtual machine density, mobility, and availability.
Comprehensive approach to software-defined networking. Windows Server 2012 R2 delivers several new capabilities for virtualized networks. With multi-tenant virtualization, datacenters can isolate tenant resources without the need for expensive and complex changes to the physical
network infrastructure. Hyper-V Network Virtualization in Windows Server provides a layer of abstraction between the physical networks that support the hosts, and the virtual networks that support the virtualized workloads. As a result, datacenters can handle multiple virtual networks with overlapping IP addresses on the same physical network and also move virtual machines across virtual networks without having to reconfigure the underlying physical network.
Using the multi-tenant Hyper-V Network Virtualization gateway capabilities in Windows Server, you can bridge virtualized networks with non-virtualized networks, service providers and Azure.

Hyper-V extensible switch. Windows Server provides flexibility with advanced packet filtering and routing. The Hyper-V extensible switch offers an open development framework for adding layer-2 functionality such as filtering, monitoring, and packet-level redirection required by the application or tenant.
Network infrastructure enhancements. With automation, the networks of virtualized datacenters and cloud environments become more agile, dynamically scalable, and able to enforce administrative controls. IP Address Management (IPAM) in Windows Server 2012 R2 implements several major enhancements, including unified IP address space management of physical and virtual networks, as well as tighter integration with System Center 2012 R2 Virtual Machine Manager (VMM). The IPAM feature provides granular and customizable role-based access control and delegated administration across multiple data centers. IPAM provides a single console for monitoring and managing IP addresses, domain names, and device identities. It also supports advanced capabilities for continuous availability of IP addressing with Dynamic Host Configuration Protocol (DHCP) failover, DHCP policies, filters, and more.
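As a hedged sketch of getting started with IPAM (not part of the original text): the feature is installed on a domain-joined member server and then provisioned, typically through Group Policy. The domain name and GPO prefix below are placeholders:

# Install the IPAM feature on a domain-joined management server
Install-WindowsFeature -Name IPAM -IncludeManagementTools

# Create the provisioning GPOs (<prefix>_DHCP, <prefix>_DNS, <prefix>_DC_NPS)
# that configure managed servers for IPAM access
Invoke-IpamGpoProvisioning -Domain corp.contoso.com -GpoPrefixName IPAM1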

For further details, download the guide.

Sunday, June 10, 2012

Defragmentation


In a Windows 2003 environment, if you are low on disk space, or if you have deleted a lot of objects since you promoted your DC, you should perform an offline defragmentation of the DIT file. As you might know, there are two types of defragmentation in the Windows 2003 family:
- Online defragmentation
- Offline defragmentation

Online Defragmentation

By default in Windows 2003, the online defrag process runs every 12 hours on each domain controller. This process defrags the Active Directory database (ntds.dit) by consolidating the whitespace generated by deleted objects, but it does not reduce the size of the database file.
You can check the Directory Service event log to see when the last online defrag was performed. Event 700 (Online Defrag started) in the Directory Service log indicates that the online defrag has started; see Figure-1.
Figure-1 NTDS online Defrag Event 700 (Online Defrag started)

Upon completion of the online defrag, event 701 is logged. See Figure-2.
Figure-2 NTDS online Defrag Event 701 (Online Defrag Ended)
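The walkthrough targets Windows 2003, where you would check these events in Event Viewer. On a DC where PowerShell is available, a rough equivalent query (a sketch, not from the original post) is:

# List recent online-defrag start (700) and end (701) events
Get-EventLog -LogName "Directory Service" -Newest 200 |
    Where-Object { $_.EventID -eq 700 -or $_.EventID -eq 701 } |
    Select-Object TimeGenerated, EventID, Message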

You can also manually start an online defrag; here is how (a scripted alternative follows these steps):
1. Start => Run => "ldp" to start LDP.
2. Select Connection => Connect, and enter the name of a domain controller with the default port 389.
3. Select Connection => Bind, then enter admin credentials.
Figure-3 Bind with Admin Credential
4. Select Browse => Modify:
   - Leave DN blank
   - For Attribute, enter "doOnlineDefrag"
   - For Values, enter the maximum time in seconds the defrag should run; here we use 180
   - For Operation, choose Add
   - Click Enter
Figure-4 Add DoOnlineDefrag Attribute to trigger the online Defrag
5. Click Run.
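If you prefer a scripted alternative to the LDP GUI, the same rootDSE modification can be imported with ldifde; this is a sketch, and the file name, DC name, and the 180-second limit are placeholders:

# Build defrag.ldf, which adds doOnlineDefrag (value = max seconds) to the rootDSE
@"
dn:
changetype: modify
add: doOnlineDefrag
doOnlineDefrag: 180
-
"@ | Set-Content -Path .\defrag.ldf -Encoding ASCII

# Import it against the target DC (DC01 is a placeholder)
ldifde -i -f .\defrag.ldf -s DC01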
Offline Defragmentation

Besides online defragmentation, which is performed automatically at a default interval of 12 hours as part of the garbage collection process, offline defragmentation creates a compacted version of the database file, which in most cases will be considerably smaller. Here is how you can start an offline defrag:
1. First, reboot the server into Directory Services Restore Mode. See "How to perform a Nonauthoritative Restore to Domain Controller?" for more detail on how to boot into DS Restore Mode.
2. Check the directory integrity. For details, see "How to check Active Directory database integrity (DIT file's integrity)?"
3. Open a command prompt and type ntdsutil.
4. Go to the files menu.
5. Issue the command "compact to C:\TempFolder". You can also do it as a single command:
ntdsutil files "compact to C:\TempFolder"
See Figure-5 for the output of the compact subcommand.
Figure-5 Compact to C:\TempFolder output
6. Next, delete the transaction log files in the current NTDS directory:
   del C:\WINDOWS\NTDS\*.log 
7. Next, move the compacted DIT file from the temporary folder back to the original directory. It is recommended to rename the original DIT file and keep it in the temporary location, to make sure nothing "strange" has happened to the compacted DIT.
The command below moves and renames the old DIT file to the temp folder:
move C:\WINDOWS\NTDS\ntds.dit C:\TempFolder\ntds_old.dit
Next, move the new ntds.dit file to the windows\ntds directory:
move C:\TempFolder\ntds.dit C:\WINDOWS\NTDS\ntds.dit
8. Now run another integrity check of the DIT file (refer to step 2; a minimal command sketch follows).
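For reference, a minimal sketch of the checks in step 8, run from the DSRM command prompt (the # lines are comments):

# Verify the low-level integrity of the compacted database
ntdsutil files integrity quit quit

# Optionally run a read-only semantic database analysis as well
ntdsutil "semantic database analysis" "go" quit quit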

Good Backup

 

When to Restore

When an object is deleted in Windows 2008 R2, the DC from which the object was deleted informs the other DCs in the environment about the deletion by replicating what is known as a tombstone (if the Recycle Bin isn't enabled) or a deleted object (with the Recycle Bin enabled).

A tombstone or deleted object is a representation of an object that has been deleted from the directory. The tombstone object is removed by the garbage collection process, based on the tombstone lifetime setting, which is 180 days by default in Windows 2008 R2. A deleted object will be recycled after the deleted object lifetime, which by default is equal to the tombstone lifetime, or 180 days in Windows 2008 R2.

A backup older than the tombstone lifetime set in Active Directory is not considered to be a good backup.

Active Directory protects itself from restoring data older than the tombstone lifetime. For example, let's assume that we have a user object that is backed up. If the object is deleted after the backup, a replication operation is performed to the other DCs and the object is replicated in the form of a tombstone. After 180 days, all the DCs remove the tombstone as part of the garbage collection process, a process routinely performed by DCs to clean up their copy of the database.

If you attempt to restore the deleted object after 180 days, the object cannot be replicated to the other DCs in the domain because it has a USN that is older than the level required to trigger replication. And the other DCs cannot inform the restored DC that the object was deleted, so the result is an inconsistent directory.
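If you want to confirm the tombstone lifetime in your own forest, one way (a sketch that assumes the Active Directory PowerShell module on a 2008 R2 or later DC) is to read the attribute from the Directory Service object in the configuration partition:

# Read the forest-wide tombstone lifetime; an empty value means the built-in
# default applies (60 days for forests created before Windows Server 2003 SP1)
Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$configNC" `
    -Properties tombstoneLifetime | Select-Object tombstoneLifetime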

Friday, August 5, 2011

INFRASTRUCTURE MANAGEMENT SERVICES

INFRASTRUCTURE MANAGEMENT SERVICES, Not Cloud Alone, There Are Other Tailwinds for Infrastructure Management

 

Market momentum created by the rising profile of offshore suppliers, maturing global delivery, productization of services and consulting offerings, and industry consolidation is rapidly shaping the growth of IMS.


The infrastructure management services market is holding strong. Even during the recession, the IMS market underwent a reinvention of sorts with the adoption of remote management and asset-light models. Many new service providers laid out offerings in the IMS area, leveraging heavily on the global delivery model. Infrastructure management really took off in the 'offshore' sense in the last three years. The infrastructure management market is expected to touch $180 billion by 2013. This is accompanied by the forecast that global IT purchases will rise 7.1 percent in 2011 to $1.7 trillion and that the growth is likely to be sustained, according to new forecast data from Forrester Research.
Combined with advances in virtualization, utility-based computing, standards-based infrastructure,
data center transformation, and cloud computing, the outlook for IMS continues to be dynamic and exciting.
Industry Drivers
Leaders in IMS help buyers transform their IT from a business support tool into one that adds business value. In economic terms, IMS helps buyers turn fixed IT costs into variable costs.
"Main drivers for this market include companies' focus on reducing operational costs and improving operational systems, the increasing confidence in medium- and long-term outsourcing deals after the economic recovery, and the cloud computing concept, including virtualization technologies," says Marcelo Kawanami, Frost & Sullivan ICT Industry Manager.
The market for IT infrastructure management services is evolving rapidly as enterprises aim to cut down spending on buying and maintaining infrastructure resources internally.
This, coupled with the ease of global delivery offered by infrastructure service providers, is resulting in a surge of demand for outsourcing.
In recent years, a major development has been the convergence of the Remote Infrastructure Management Outsourcing (RIMO) and traditional Infrastructure Outsourcing (IO) models. The RIMO model is gaining wider acceptance with buyers, resulting in a larger number of high-value offshore infrastructure management outsourcing deals, including new engagements signed by traditional infrastructure suppliers.
Source : GS100: 2011 Global Services Compendium

Monday, January 10, 2011

DHCP Server Icons - List

In the DHCP console, different icons represent the state of servers, scopes, and options. The lists below describe these icons and their meanings.

Server-related icons

DHCP server added: DHCP server added to the console.
DHCP server connected (active): DHCP server connected and active in the console.
DHCP server connected (inactive): DHCP server connected but not authorized in Active Directory for use on your network.
DHCP server not loaded: DHCP server connected, but the current user does not have the administrative credentials to manage the server.
DHCP server warning: Available addresses for server scopes are 90 percent or more leased and in use. This means that the server is nearly depleted of available addresses to lease to clients.
DHCP server alert: No addresses are available from server scopes because the maximum (100 percent) of the addresses allocated for use are currently leased. This represents a failure of the DHCP server on the network because it is not able to lease or service clients.

Scope-related icons
Scope or superscope active: The scope or superscope is active.
Scope or superscope inactive: The scope or superscope is inactive.
Scope warning: Scope or superscope warning. Scope warning: 90 percent or more of the scope's IP addresses are in use. Superscope warning: if any scope within the superscope has a warning, the superscope shows a warning.
Scope alert: Scope or superscope alert. Scope alert: all IP addresses have been allocated by the DHCP server and are in use; no more clients can obtain IP addresses from the DHCP server because it has no more IP addresses to allocate. Superscope alert: at least one scope contained in the superscope has all of its IP addresses allocated by the DHCP server. No clients can obtain an IP address from the range defined in the scope that is 100 percent allocated; if other scopes within the superscope contain available addresses, the DHCP server can allocate addresses from those scopes.
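If you would rather check those 90 and 100 percent thresholds from the command line than hunt for icons, the DHCP server can report address usage per scope. A sketch (run on the DHCP server itself; output format varies by Windows version):

# List all scopes and whether they are active
netsh dhcp server show scope

# Show per-scope counters, including addresses in use and addresses free
netsh dhcp server show mibinfo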

Option-related icons

Server options folder
Server option
Scope options folder
Scope option
Reservation option

Other console icons

DHCP root: Root of the DHCP console.
Address pool folder: Address pool folder.
Scope allocation range: Addresses in this range are allocated to the available address pool used to offer leases to clients.
Scope exclusion range: Addresses in this range are excluded from the available address pool used to offer leases to clients.
Active leases folder: Active leases folder.
Client, LAN-based: Active lease; this address is not available for lease by the DHCP server.
Client not registered: Expired lease; this address is available for lease by the DHCP server.
Client active and registered: Active lease, DNS dynamic update pending; this address is not available for lease by the DHCP server.
Client using remote access: The client is using a dial-up network connection through a remote access server.
Reservations folder: An individual reservation and the reservations folder.
BOOTP table: Bootstrap Protocol (BOOTP) table.
BOOTP entry in table: BOOTP entry in the table.

Sunday, January 9, 2011

SNMP

SNMP Monitoring: One Critical Component to Network Management
Network Instruments White Paper
Although SNMP agents provide essential information for
effective network monitoring and troubleshooting, SNMP alone
does not provide all the information you need to stay on top of
your network. For comprehensive analysis of many issues, a
network analyzer with packet capture capabilities is required
as well. This white paper describes how SNMP works, the
advantages of SNMP monitoring, and how SNMP continues to
remain a critical part of a complete network analysis solution.


Overview

What is SNMP?


SNMP (Simple Network Management Protocol) is the common language of network monitoring–it is integrated into
most network infrastructure devices today, and many network management tools include the ability to pull and
receive SNMP information. SNMP extends network visibility into network-attached devices by providing data
collection services useful to any administrator. These devices include switches and routers as well as servers and
printers. The following information is designed to give the reader a general understanding of what SNMP is, the
benefits of SNMP, and the proper usage of SNMP as part of a complete network monitoring and management solution.
The Simple Network Management Protocol (SNMP) is a standard application layer protocol (defined by RFC 1157)
that allows a management station (the software that collects SNMP information) to poll agents running on network
devices for specific pieces of information. What the agents report is dependent on the device. For example, if the
agent is running on a server, it might report the server’s processor utilization and memory usage. If the agent is
running on a router, it could report statistics such as interface utilization, priority queue levels, congestion
notifications, environmental factors (e.g., fans are running, heat is acceptable), and interface status.
All SNMP-compliant devices include a specific text file called a Management Information Base (MIB). A MIB is a
collection of hierarchically organized information that defines what specific data can be collected from that
particular device. SNMP is the protocol used to access the information on the device the MIB describes. MIB
compilers convert these text-based MIB modules into a format usable by SNMP management stations. With this
information, the SNMP management station queries the device using different commands to obtain device-specific
information.

There are three principal commands that an SNMP management station uses to obtain information from an
SNMP agent:

1. The get command collects statistics on SNMP devices.
2. The set command changes the values of variables stored within the device.
3. The trap command reports on unusual events that occur on the SNMP device.
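For illustration (not from the white paper), here is roughly what the get and set operations, plus a simple counter walk, look like using the net-snmp command-line tools; the device address 192.0.2.1 and the community strings are placeholders. Traps, by contrast, are sent unsolicited by the agent to a trap receiver on the management station.

# get: read a single value - sysUpTime.0 (OID 1.3.6.1.2.1.1.3.0)
snmpget  -v 2c -c public  192.0.2.1 1.3.6.1.2.1.1.3.0

# set: write a variable - sysContact.0 (OID 1.3.6.1.2.1.1.4.0)
snmpset  -v 2c -c private 192.0.2.1 1.3.6.1.2.1.1.4.0 s "noc@example.com"

# walk the ifInErrors column (1.3.6.1.2.1.2.2.1.14) - the per-interface error
# counters a switch keeps even though it never forwards the error frames
snmpwalk -v 2c -c public  192.0.2.1 1.3.6.1.2.1.2.2.1.14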
The SNMP management console reviews and analyzes the different variables maintained by that device to report
on device uptime, bandwidth utilization, and other network details.
SNMP delivers management information in a common, non-proprietary manner, making it easy for an administrator
to manage devices from different vendors using the same tools and interface. Its power is in the fact that it is a
standard: one SNMP-compliant management station can communicate with agents from multiple vendors, and do
so simultaneously. Illustration 1 shows a sample SNMP management station screen displaying key network statistics.
Another advantage of SNMP is in the type of data that can be acquired. For example, when using a protocol
analyzer to monitor network traffic from a switch's SPAN or mirror port, physical layer errors are invisible. This
is because switches do not forward error packets to either the original destination port or to the analysis port.
However, the switch maintains a count of the discarded error frames and this counter can be retrieved via an
SNMP query.

Why use SNMP?
SNMP can be used in any environment where constant monitoring of key devices is required. Many SNMP
management stations offer long-term reporting capabilities, allowing an administrator to watch network trends
develop over time and to take appropriate action before problems can seriously affect users. Illustration 2
shows a sample report illustrating maximum, minimum and average router utilization.
Triggered notifications are also available from many SNMP management stations. Notifications allow the
administrator to receive an e-mail or page if certain user-defined thresholds have been exceeded, such as
maximum port utilization.
What is missing from SNMP?
While SNMP provides excellent statistics on the macro level, it does not provide the level of detail that is often
required to completely resolve many network issues. For example, while SNMP may show high utilization on
the router’s Internet interface, it may not show what kinds of traffic are using up the bandwidth or who is
responsible for the traffic. This leaves the administrator knowing what the problem is (high bandwidth
consumption to the Internet), but not knowing the cause, and therefore, lacking the ability to quickly resolve
the issue. Illustration 3 shows how a network analyzer’s Top Talkers view with detailed analysis capabilities
can assist in in-depth problem solving scenarios. By reviewing the network’s Top Talkers (who is causing the
traffic), the network administrator can isolate the cause of the excessive utilization and take steps to
resolve the issue. This deeper level of detail is not found inside an SNMP management console. However, a
network analyzer with SNMP management capability can offer the full view of the fundamental network issue.
Make no mistake: SNMP monitoring should be a part of any network management solution. But effective
administration of enterprise networks requires more than SNMP management. Only a comprehensive network
analyzer can deliver both in-depth analysis and the ability to manage and view statistics from SNMP-compliant
devices. When selecting a network analyzer, choose a solution that provides full network coverage
for multi-vendor hardware networks including a console for SNMP devices anywhere on your LAN or WAN.
Also, look for a solution that includes a network mapping program that can help you visualize the network by
continually monitoring and displaying device and route statuses. In addition, the network analyzer should report
information about services running on the primary devices. This information is important to an administrator of
a single site, and invaluable to an administrator who is responsible for multiple sites. Often, the network
mapping program is integrated with the SNMP management station, allowing the two systems to share
information. This is accomplished by using the network mapping tool as a first step, SNMP as a high-level
drill down, and finally a network analyzer for deeper level statistics and information.
A comprehensive network analyzer also includes a packet decoding and analysis tool. Providing the additional
depth that SNMP management lacks, a network analyzer allows you to look beyond simple statistics into the
actual frames being transmitted across the network. While network analyzers vary greatly in their feature sets,
some of the primary functions you should look for in addition to packet capture and decode are some form of
Expert analysis for advanced problem identification and resolution, long-term reporting capabilities, and
triggered notifications. These features can provide ongoing insight into the day-to-day operations of the
network, at a level beyond the scope of SNMP. Figure 1 is a checklist designed for any network administrator
to review when choosing a comprehensive network management solution.

SNMP – A Component of Total Network Management
Conclusion

SNMP management provides valuable insight to any network administrator who requires complete visibility
into the network, and it acts as a primary component of a complete management solution. However, SNMP
was never intended as a comprehensive network monitoring solution. It therefore must be complemented by a
complete suite of network monitoring and management tools. You should not have to choose whether you want
to review network traffic or network devices. For complete visibility, choose a solution that provides both. When
shopping for the right network analyzer for your network, consider a comprehensive solution for complete
coverage.

Building a NOC

Expert Network Operations Centre development can be a daunting task. What monitoring software should be used? What type of server should it run on? What equipment should be monitored? What access rights should the NOC Techs have to the monitored equipment? What monitoring, troubleshooting, problem escalation and best practices policies and procedures should be put in place? Who will design the NOC? Who will build it? Who will take care of daily operational NOC maintenance? Who will manage The NOC Techs and adherence to procedure?

Building a NOC, even with relatively inexpensive monitoring products such as HP OpenView Operations, is not a trivial design and deployment task. And despite what salespeople will tell you, there is no monitoring product that is all things to all people. Vendor competitors would like you to think that their solution is the Holy Grail itself, but the reality is that no matter what products you choose, the capabilities you get will be far less than what you were expecting. And just the regular "care and feeding" of your monitoring environment once it's operational will be a far more involved, complicated, expensive and time-consuming process than you ever imagined. So how do you avoid costly mistakes in the product selection, deployment and operational phases of your project?

Consider having some or all of your project handled by someone who already has a lot of professional NOCs under their belt, and preferably a company that has no financial stake in which monitoring software you purchase.

Vigilance Monitoring builds very high-end, comprehensive Network Operations Centers using state-of-the-art monitoring products such as NetIQ, BMC Patrol, HP OpenView Operations, HP Network Node Manager (NNM), IBM Tivoli, Nortel Optivity NMS, Cisco CiscoWorks, Sun Solstice SunNet Enterprise Manager, Micromuse, Computer Associates CA Unicenter, Microsoft Operations Manager 2000 (MOM), Microsoft Systems Management Server (SMS) and competitor software.

Vigilance Monitoring is not a software reseller or a VAR, so we are free to recommend and use whatever monitoring solution is best for you, not just the products we get commissions on. We are the only company we know of with over 15 years' experience making a business out of designing, building and managing professional NOCs. We can identify, install, implement and manage just the right monitoring product(s) to fit your monitoring needs and IT budget.