Saturday, January 29, 2011

How to mount VHDs from command-line on Hyper-V Server 2008 R2

I've installed and am getting familiar with the free Hyper-V Server 2008 R2, but I'm having trouble figuring out how to manage VHDs locally from the command line. The built-in command-line tools only seem to configure the host OS; they don't allow mounting or managing VHDs.

Does this product require another machine to manage it remotely? Are there built-in command line tools which allow this sort of management? Are there freely available tools which do?

NOTE: this is not Windows Server 2008 R2, this is the standalone Hyper-V Server 2008 R2.

  • Hyper-V Server (unlike Windows 2008 with Hyper-V) requires a remote machine - you run Hyper-V Manager on the remote machine and point it at the Hyper-V Server and do your management that way. The remote machine doesn't have to be a server - you can use Vista or Windows 7.

    McKAMEY : This makes sense, esp. since it is a free offering.
    ObligatoryMoniker : +1 for recommending the use of Hyper V manager from a Windows 7 machine. I do this all the time and it works great.
  • Standalone Hyper-V Server is meant to be remotely managed, just as VMware's ESXi is. The local console has quite a lot of (potential) functionality, but it is based on Windows Server 2008 Core (to some degree at least - Microsoft are quite adamant that this is not a "Windows" OS) and much of the local functionality is either disabled or not installed by default because of the focus on remote/centralised management (via SCVMM, RSAT, etc.). That focus makes sense: systems like Hyper-V Server are intended for use in environments where everything is remotely managed, and it's highly unlikely that you will be seriously running these headless hypervisors in an environment where you don't have a suitable system on which to install the remote management tools.

    That said, it is possible to enable PowerShell locally on Hyper-V Server 2008 R2, and with that you get the ability to do most of what I think you want to do. You will probably need to install additional components in order to extend the standard functionality and allow you to create and manage VMs, manage storage and virtual networking. There are probably other ways to do this too, as all of the functionality you are looking for is available programmatically via WMI.

    So to answer your question: no, a remote machine is not required, but you will have to jump through some hoops.

    McKAMEY : Thanks for the link! That looks like it might be exactly what I need. It makes sense that they wouldn't make it an easily compelling alternative to Win2K8. I'm okay with jumping through hoops to try this out without needing to invest a ton in licensing before I'm really sold.
    Helvick : The other potentially low cost option is the client based RSAT tools for Windows 7 ( http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en ). I don't know if this version of RSAT will work with Vista or if you have to use the earlier Vista version but it will not work on XP or earlier.
    McKAMEY : Once PowerShell is enabled, the PowerShell Management Library for Hyper-V http://pshyperv.codeplex.com appears to be the next step.
    From Helvick
  • If you enable PowerShell and remote management from the Hyper-V Server command-line utility, you can then use PowerGUI with the Hyper-V PowerPack to manage VMs/VHDs:

    1. Download PowerGUI
    2. Import the Hyper-V PowerPack
    3. Watch the Hyper-V PowerPack demonstration video
    From McKAMEY

Emails sent through SMTP on a VPS are considered to be spam

In the course of our business we have to send regular mailings to our clients: invoices, informational emails, etc.

Previously we received and sent emails using our hosting provider's mail server. But as the number of clients increased, we had to order a VPS and install our own SMTP server there to perform our mailings.

So now we have the provider's default mail server for receiving emails; let's call the domain business.com. We have email accounts like info@business.com, etc. We use this mail server to receive emails and manage our email accounts.

And we have an SMTP server running on the VPS. We use this SMTP server only for sending emails with From addresses like info@business.com. The VPS has the default DNS records created by the provider, say IP.AD.RE.SS <-> ip-ad-re-ss.provider.com.

Mailings are sent using either desktop email clients or a custom Java-based application that uses JavaMail.

The problem is that most of the emails we send end up in the spam folders of our clients' email accounts. Clients have their email at Gmail, Yahoo, Hotmail, etc.

Could you please tell us the most probable reason for, and solution to, this problem?

Is there any service on the Internet where we can send a test email and get back a report describing why that email might be considered spam?

  • Most likely a reverse DNS issue. The only way to know for sure is to see the header information of the emails in the junk/spam boxes of your clients. Your VPS provider should be able to set up a reverse DNS record for your domain (a quick command-line check is sketched below).

    Ilya : Do you mean the record for our VPS IP? The reverse DNS record exists: IP.AD.RE.SS is resolved to ip-ad-re-ss.provider.com successfully.
    Ilya : By the way, do mail providers add headers to emails that are marked as spam explaining why those emails were considered spam?
    xeon : Yes. The provider will fill headers with information that can be used to determine the root cause. You should see something like X-Spam-Status and that will give you the reasons why it was marked as spam and the score.
    From xeon
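
    A note on checking this yourself: the reverse DNS and sender-policy records mentioned above can be verified from any shell with dig (or host). The IP and host name below are the placeholders used in the question, not real values.

        # Does the sending IP resolve back to a name (PTR), and does that name resolve forward again?
        dig -x IP.AD.RE.SS +short
        dig A ip-ad-re-ss.provider.com +short

        # Is there an SPF record authorising the VPS to send mail for business.com?
        dig TXT business.com +short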
  • You need to make sure that your email server is set up properly so that these mails are not marked as spam. As mentioned, one of the most common reasons is reverse DNS. Most of the big providers require that you have a correct RDNS pointer record set up for your mail server before they will accept mail from you.

    You also want to check that the IP your provider has given you has not been blacklisted; use a facility like this one to check (a command-line DNSBL lookup is also sketched below). If it is on a list and it's a new IP you have only just been given, you can probably get your provider to give you a new one; if you've had it a while, it will be harder to prove that you are not the ones who got it blacklisted.

    Ilya : Thanks, we checked IP. It is not blacklisted.
    From Sam Cogan
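
    The blacklist check can also be scripted against a DNSBL such as Spamhaus: you reverse the octets of your sending IP and query it as a hostname. The IP below is an example only.

        # Example: check 203.0.113.45 (octets reversed) against Spamhaus ZEN
        dig 45.113.0.203.zen.spamhaus.org +short
        # Any 127.0.0.x answer means the IP is listed; no answer (NXDOMAIN) means it is not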
  • Try this Email Server test. It does a whole load of checks to see why your emails could be labeled as spam.

    Ilya : Thanks. We followed its recommendations and hope they will help and that there won't be problems during our next mailing.

init.d script not working .... but the command works if I execute it in the console

I have a command that works fine if I execute it from the command line, but when I put it in an init.d script it won't start (well... it starts, but behaves differently from when it is run directly).

Any idea why this is not working in the init script?

The command is: bluepill load /var/www/html/bluepill.conf

And the init.d script is:

    #!/bin/sh

    ## Based on http://www.novell.com/coolsolutions/feature/15380.html
    # chkconfig: 345 99 1
    # processname: solr
    # Provides: bluepill
    # Default-Start: 3 4 5
    # Default-Stop: 0 1 2 6
    # Short-Description: bluepill daemon, providing process monitoring
    # Description: Bluepill

    # Check for missing binaries
    BLUEPILL_BIN=/usr/local/bin/bluepill
    test -x $BLUEPILL_BIN || { echo "$BLUEPILL_BIN not installed";
            if [ "$1" = "stop" ]; then exit 0;
            else exit 5; fi; }

    # Check for existence of needed config file and read it
    BLUEPILL_CONFIG=/var/www/html/bluepill.conf
    test -r $BLUEPILL_CONFIG || { echo "$BLUEPILL_CONFIG not existing";
            if [ "$1" = "stop" ]; then exit 0;
            else exit 6; fi; }

    case "$1" in
      start)
        echo -n "Starting bluepill "
        $BLUEPILL_BIN load $BLUEPILL_CONFIG
        ;;
      stop)
        echo -n "Shutting down bluepill "
        $BLUEPILL_BIN quit
        ;;
      restart)
        ## Stop the service and regardless of whether it was
        ## running or not, start it again.
        $0 stop
        $0 start
      ;;
      *)
        ## If no parameters are given, print which are available.
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
    esac

Update (to answer a few questions):

I also added the script so that it is executed at boot time, using:

chkconfig --add bluepill_script  
chkconfig --level 345 bluepill_script  on  
  • ok.. dumb question but did you set the script to start at bootup? I'm more familiar with debian style distros but ntsysv or chkconfig might be what you need.

    massi : yes ... see the update
  • I'll echo Kamil's call for output when run.

    Furthermore, have you tried chkconfig --add bluepill and chkconfig bluepill on?

    Otherwise, I'm betting it's some sort of environment variable in the script. Try sourcing an environment at the start via . /etc/profile or the like. Especially since this looks like it's installed in /usr/local/bin. It may need PATH or LD_LIBRARY_PATH set properly.

    massi : As I said in a response to Kamil Kisiel's comment, even after the server is started ... when I try service bluepill_script start it seems to work (no error is displayed) but it's not doing its job
  • Another dumb question, is bluepill loaded in memory after the script is run? ps -ef | grep bluepill

    From Jmarki
  • try adding

    PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
    

    to the top of the init script.

    massi : Thanks Justin !!! it is now working just as expected !
    massi : Justin, why does this need to be in the script?
    Justin : because /usr/local/bin/bluepill is likely trying to start more programs located in /usr/local, which it is unable to find. /usr/local/bin/bluepill might be a shell script, you can try reading it.
    From Justin
  • The lack of errors in the log is no clear indication that the init script is working. Two simple steps will debug this.

    1. Add some debugging details inside the script. I like to write the line number to a log file from various points within the script (see the sketch after this list). No output? Then it's almost certainly not being run.
    2. Ensure the script has in fact been enabled to run when you think it has, as several others have already stated. If it's not being run that could account for the lack of error messages in the logs.
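
    As an illustration of point 1, lines like the following can be dropped into the script. The log file path and messages are only examples; logging $PATH is particularly useful because, as the other answers note, a missing environment is the usual culprit.

        # Hypothetical debug lines to paste into the init script (remove once solved)
        DEBUG_LOG=/tmp/bluepill-init.log
        echo "$(date) started as: $0 $*" >> "$DEBUG_LOG"
        echo "$(date) PATH=$PATH" >> "$DEBUG_LOG"        # compare with your interactive shell
        echo "$(date) about to run: $BLUEPILL_BIN load $BLUEPILL_CONFIG" >> "$DEBUG_LOG"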

vSphere network setup assistance

Hi all,

I just purchased an HP virtualization bundle and am trying to figure out how to get the vSphere networking properly configured. I've read a ton of documents on the web and on VMware's site, but I need some help with this.

Here's what I have hardware/software wise:

2 x HP DL180 servers with two onboard gigabit NICs and one quad-port gigabit NIC

1 x HP 2510G 24-port switch

1 x vSphere ESX Essentials Plus license

1 x HP LeftHand Networks VSA license

I managed to get the HP servers up and running with ESX installed and operational. I also configured the VSA according to some materials I found on the web, and that works great too.

My problem is that I really don't know what I'm doing in terms of the virtual networking config in vSphere. I configured everything so it's using my production LAN IPs just so I could see how it works and play with it. I plan to reload everything and start from scratch once I understand what I'm doing with the NICs/switches and virtual networking.

Your guidance and help is greatly appreciated.

thanks,

-- SL

  • This depends a lot on what your goals are... Do you need to have different VLANs for your VMs? How about redundancy? Are all 6 NIC ports going to be hooked to the same switch? What happens if that switch dies? A good suggestion is to split the NIC ports between at least two switches.

    Concerning the setup of the vSwitch, you can put one or more NICs on a vSwitch and then that switch can be used as the attachment point for your VMs. This will give you multiple paths out of your ESX box for each VM. It can increase both your bandwidth (depending on the HP 2510G's capabilities) and your redundancy.

    You also can just assign one physical NIC to each VM, but then you would be limited to 4 VMs in your current config as you want to have 2 NICs dedicated to the console. 2 NICs are not required, but if the link is dead on the console's NIC(s), then ESX will not boot. Learned that one the hard way at 4AM.
    Here are some good articles on VMWare's site:

  • No, I don't need to have different VLANs for the VMs. We're a small environment here; our production LAN runs off a single Cisco 48-port gigabit switch. This is a big change from where I worked previously, but here, if the network goes down, we go home :) Anyway, I would of course like it to operate as issue-free as possible, but I'll have to work with what I've got. As there are six NICs in total, I was thinking that this NIC config would work well. What are your thoughts?

    Two NICs for the service/management console
    Two NICs on a separate vSwitch for the VMs
    Two NICs for the VSAN (should I create a separate vSwitch for that too?)

    Just not sure how I should set it up and how to get there?!

    thanks,

    Scott Lundberg : If the VSANs are Fibre Channel based, then they are configured in the Storage configuration, not the network configuration. I believe iSCSI SANs are also configured in storage, but I don't have an iSCSI device to confirm that. Other than that, your suggested configuration should work fine. Just so you are aware, you may run into some problems if your hardware switch can't do a good job of trunking. Also make sure you configure the vSwitch to use IP hashing as its means of determining which NIC to use when sending packets.
    Scott Lundberg : No problem... That should work, but make sure that you, at a minimum, separate your SAN and IP networks with a VLAN. You can do this through the vSwitch and then set the corresponding VLAN on your hardware switch... Are you using the HP for the SAN network and the Cisco for your LAN network?
  • Yes, the HP switch is for the SAN network and the Cisco is the production LAN. I've read a bit about enabling jumbo frames; should I do this as well? Thanks.
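
    For reference, a layout like the one discussed above can be built from the ESX service console with the esxcfg-* tools. This is only a rough sketch: the vSwitch names, vmnic numbers, port group names and IP address are assumptions and will differ on your hardware, and jumbo frames also have to be enabled end to end on the physical switch.

        # Sketch only: check your actual NIC names first with: esxcfg-nics -l
        esxcfg-vswitch -a vSwitch1                        # vSwitch for VM traffic
        esxcfg-vswitch -L vmnic2 vSwitch1                 # attach two uplinks
        esxcfg-vswitch -L vmnic3 vSwitch1
        esxcfg-vswitch -A "VM Network" vSwitch1           # port group the VMs will connect to

        esxcfg-vswitch -a vSwitch2                        # vSwitch for the VSA/iSCSI storage network
        esxcfg-vswitch -L vmnic4 vSwitch2
        esxcfg-vswitch -L vmnic5 vSwitch2
        esxcfg-vswitch -m 9000 vSwitch2                   # jumbo frames on the storage vSwitch
        esxcfg-vswitch -A "iSCSI" vSwitch2
        esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 "iSCSI"   # VMkernel port for storage
        esxcfg-vswitch -l                                 # review the result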

How do we build private cloud computing in my organization?

I'm a young admin in an organization, and I would like to know how we can prepare for private cloud computing. For example, what does cloud computing consist of?

Regards

Pond

  • "Cloud" computing is a pretty vague term, so I'm not sure quite what you're looking for. One of the common uses for the term is a large network of servers that can be provisioned to do a variety of different tasks based on changing demands. For that use, you can try Ubuntu Enterprise Cloud, which is designed to be compatible with the Amazon EC2 APIs but allow you to run your own private cloud.

  • Generally, by definition, if the computing resources are in your organization they're not necessarily "cloud." That said, if the IT cost center is chartered with providing servers and services to multiple business units, you can consider provisioning computing resources that you sell and manage internally to be your own "cloud."

    "Cloud" could be a bunch of VPC instances to exposing mainframe applications.

    The point is that your end-user doesn't know nor care how you manage your Service Level Agreements or hardware, just that the service is provided. Thus the arbitrary "cloud."

    From Xepoch
  • Cloud computing is not so much a technology but a system built around four basic concepts:

    • Abstraction - the user does not need to know the underlying hardware that is running the system
    • Elasticity - resources can be added and removed from the system easily
    • Democratisation - users can allocate more or less resources without needing administrator assistance
    • Utility pricing - users can get resources without upfront capital outlay, simply paying a monthly fee for resources used, like you do with utility bills (electricity, etc)

    There are many vendors who have cloud computing solutions that provide the benefits associated with the implementation of these concepts: Microsoft Azure, Amazon Web Services, Google Apps, RackSpace, etc.

    This works well for a cloud vendor supplying solutions to multiple customers. Where Private Cloud Computing falls down inside the firewall is that it can have Abstraction, but it will only be Elastic up to the amount of resources you can devote to the system. Democratisation can certainly move the management of the resources to the consumers of those resources but you need to have over-capacity in your system to have these resources free. Obviously, these resources need to be bought upfront with enough over-capacity to meet anticipated needs so you have capital investment and don't get Utility Pricing.

    If Abstraction and Democratisation are your biggest requirements for your system and your security and privacy concerns outweigh the cost involved in building a system with enough capacity to allow sufficient Elasticity for your user's needs, then private cloud could work for you.

    I don't know of any cloud vendors that will allow you to run their solutions privately rather than hosted on their servers. Ubuntu Enterprise Cloud has the open source Eucalyptus system, which is compatible with the Amazon EC2 API, but you should make sure it is capable of running a production system in the manner you want before committing to this platform.

    gbjbaanb : I don't see why internal clouds cannot be elastic - after all, Amazon runs their business on their cloud (which is why they created it - they had all the spare capacity they don't use until Christmas).
  • You aren't very clear with your question, but if you are looking for places that provide a swath of services and support:

    Amazon Web Services provides a lot of infrastructure, priced at a premium: http://aws.amazon.com/

    Microsoft also offers Azure: Microsoft.com/WindowsAzure

    Both of those will require a lot of technical set-up. If your business does not absolutely depend on using technology like this, then you should just work on improving your local systems.

    If you are just looking for what people mean when they say "cloud computing", you should check wikipedia.

  • I am also looking for a solution that will enable me to build a private cloud in-house. My main goal is to optimize the usage of my resources, including servers, power, space, and manpower. By giving my internal customers a way to provision their own services, I can significantly reduce the strain put on our IT department staff.

    However, I also want to build an infrastructure that gives me the capability to scale onto public cloud resources if the need arises. Some of the services we are running need to be public-facing. If there is a spike, it would be nice to be able to scale onto a public service such as AWS.

    The Ubuntu Enterprise Cloud (UEC) is certainly a solution I looked into, and it seems capable. Another option you should look into is AppScale (code.google.com/p/appscale/), which allows you to run Google App Engine applications within UEC, Xen, KVM, or EC2.

    But the platform I've found to be closest to fitting my needs is OpenQRM. I suggest you take a look at it to see if it will fit your needs. I'm just getting started with it myself, but as I understand, it can also work with UEC and Amazon AMIs. It appears to have powerful support for many virtualization and storage technologies.

    Virtualization technologies: VMware, Xen, KVM, and Linux-VServer with physical to virtual, virtual to physical, and virtual to virtual migrations.

    Storage management: NFS, iSCSI, AoE/Coraid, NetApp, local disk (transferring server images to the local disk), LVM-NFS (NFS on top of LVM2 to allow fast cloning), LVM-iSCSI (iSCSI on top of LVM2 to allow fast cloning), LVM-AoE (AoE on top of LVM2 to allow fast cloning)

    Take a look at the OpenQRM features list for more details. I hope this helps.

    P.S. Sorry for not including links to everything, but ServerFault doesn't allow me to include more than one link because I don't have enough reputation yet.

    Brett McCann : You might consider CloudIQ Platform from Appistry. It can be used to manage an internal Private Cloud but can also run on Amazon. You can even run a hybrid cloud that spans both.
    From Tauren

Ubuntu Server Hotswap RAID 1: Hardware or Software?

I'm building new servers for a project I'm working on. They will all run Ubuntu Server x64 (10.04 soon) and require a RAID 1 hotswap configuration (just two drives) to minimize downtime.

I'm not worried about RAID performance. The server hardware will have plenty of CPU power, and I'm only doing a RAID 1. My only requirements are:

  1. Everything, including the OS, must be mirrored.
  2. There must be no down-time when a drive fails. I need to be able to swap out the failed drive with another and have the RAID rebuild itself automatically (or maybe by running a simple script).

I'm wondering if the built-in Ubuntu Software RAID can handle this, particularly the hotswap part. 10.04 looks promising.

I'm considering buying the 3Ware 9650SE-2LP-SGL RAID controller, but with the number of servers we're purchasing, that would increase the total price quite a bit.

Any advice at all would be appreciated. Thank you.

  • Are you sure you need RAID?

    I only ask because you said that with the number of servers you are purchasing, that controller will increase the cost a lot. So with this many servers for a single project, aren't a lot of these servers redundant? Perhaps you would save more money by not buying all those second hard drives?

    Andrew : Each server will need to be highly available. If at all possible, we need to avoid even one of them being down for an entire day while it's being rebuilt.
    Kyle Brandt : Ah, well then, you are sure :-)
    Andrew : You posed a valid question. I did say *any* advice would be appreciated. :-)
  • I have hot-swapped drives using the software RAID built into the Linux kernel on many occasions. You may need to run a command to add the new device (a typical mdadm sequence is sketched below). I believe it is possible to make it automatic, but in the places where I use it, manually running the command to add the new drive has never been a problem.

    I am not entirely certain that the computer will survive with zero downtime. That may depend on your hard drive controller and how it responds to a drive failure.

    Kyle Brandt : +1 For actually answering his question :-)
    Andrew : Thanks for that answer. So what I gather from what you're saying is that to guarantee zero downtime, I would need the hardware RAID, but it could be possible with software RAID depending on the motherboard's drive controller. Is that about right?
    Zoredache : In one HD failure I saw a drive go bad in a way that messed up the hardware RAID controller to the point that the OS crashed and a reboot was required. But that happened several years ago; I haven't seen it happen recently.
    Andrew : Ok, so maybe not guarantee, but close enough.
    Zoredache : Also, do make sure you verify that the motherboard controller supports hot swapping.
    Andrew : The motherboard chipset is Intel Tylersburg 5500 with ICH10R, so it should handle hot swap. I think I'm going to recommend the hardware controllers if my company can afford it, but I feel much safer with a software RAID now. Thank you. You've been very helpful.
    From Zoredache
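
    For reference, the manual replacement described above usually looks something like the following with Linux software RAID (mdadm). The array and device names are examples only; check /proc/mdstat for your actual layout before running anything.

        cat /proc/mdstat                             # confirm which member failed
        mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the member failed, if the kernel hasn't already
        mdadm --manage /dev/md0 --remove /dev/sdb1
        # physically swap the drive, then copy the partition table from the good disk and re-add
        sfdisk -d /dev/sda | sfdisk /dev/sdb
        mdadm --manage /dev/md0 --add /dev/sdb1
        watch cat /proc/mdstat                       # watch the rebuild progress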
  • I think the other posts have answered the question but I have a somewhat related thought to add.

    Since uptime is important in this application, make sure you're using Puppet and Kickstart for setting up and maintaining the configurations on the servers. Also make sure you have a good backup solution; rsnapshot works pretty well.

    The hardware should be pretty replaceable cogs once you're dealing with any sort of scale. You'll eventually have to deal with the following situations, so you need a plan for how you're going to handle them now, not when they happen.

    • Even with redundant power supplies, RAID, etc., machines will fail in time.
    • A client starts to outgrow the hardware they are on... if all the clients are on separate hardware, as your replies to some of the answers seem to imply.
    • Hardware replacement. In 5 years or so you'll want to replace hardware.
    Andrew : Thank you for that advice! I had never heard of Puppet or Kickstart before. I've written my own Bash scripts for setup, but looking into those now.
    3dinfluence : Also look at preseeding which I believe is the official Debian/Ubuntu way to do it. Although I believe that kickstart also works. http://wiki.debian.org/DebianInstaller/Preseed

Will Windows 2008 or Windows 7 interfere with a network of Windows 2003 servers

Our network is currently comprised of Windows 2003 servers and Windows XP workstations. Within the next few months, our PC deployment group plans to start rolling out workstations with Windows 7. In addition, one of our vendors just sent us an e-mail stating that their next version of the software due out in the next few months will require Windows 2008 and no longer support Windows 2003.

Now, our system administrator is telling us that Windows 7 and Windows 2008 will interfere with our network because our main servers (Active Directory, DNS, etc) are Windows 2003 servers. They will force master browser elections and so on...

Is he correct? Will they interfere? I would have thought this would be a larger issue if this was indeed the case. Links to any documentation regarding this matter would be appreciated.

  • Provided your new servers are only member servers and nothing fundamental to the operation of your domain, you won't have any nonsense at all. Speaking from experience, my first 2008 R2 file server went into my environment (which was totally 2003) without a hitch.

    Even if you decided your new 2008 servers were going to start hosting Active Directory and all its associated stuff, it would cause you no more real grief than any other major change like this on 2003.

    People knock Microsoft so often, but they really DO go to ridiculous lengths to ensure application compatibility. This is especially true of Microsoft software, but still a great deal of compatibility work goes into ensuring your legacy apps keep working too. After all, if stuff didn't work with newer versions of Windows, what's the point of upgrading at all?

    From Ben
  • Is your sysadmin stuck in 1996? He's worried about Master Browser elections in a Win2k3 AD? Ask him for specific documented problems - it's possible that you may have some serious legacy applications, but if that's the case you should have a WINS infrastructure and lock down the NetBIOS NodeType on the machines with GPOs (or maybe DHCP.)

    Or he may be referring to something else when you say he said "...and so on..." ; like I said, press him for details and then research those details. Because in general, the scenario you describe is not a problem.

    NYSystemsAnalyst : How would legacy applications affect this, and how do WINS and NetBIOS relate to it? Forgive me ... I have a solid, basic understanding of networking, but I'm not a network admin. I have pressed for specifics, but received little. I'll keep at it. Any idea where this fear that newer versions of Windows are going to take over the network might emanate from?
    mfinni : It has nothing to do with networking per se, it has to do with Windows. In Windows infrastructures that predate AD, they used WINS or NetBIOS broadcasts for name resolution. AD depends strictly on DNS. So, unless you've got old legacy applications that have a dependency on non-DNS name or service resolution, your sysadmin is showing his age (and specifically his inexperience with current technology.)
    mfinni : Additional details: newer versions of Windows will tend to win browser elections via NetBIOS broadcasts. If you have old apps that depend on NetBIOS, you should be using WINS and not allowing the client machines to be holding elections to begin with.
    From mfinni
  • We have run pretty much all flavors of Windows servers and clients in a 2003-level domain. No problems related to that at all. When separating our group from another, we set up a separate 2008 domain and migrated 2003 R2 and 2008 servers to it. No issues there either. In my experience, AD is one of the things that MS has got right.

    Migrating AD from 2003 to 2008/2008 R2 is well documented, and MS and sites all over have lots of articles and how-tos on it. I recommend you dig out some good documentation and get up to date.

    From Gomibushi
  • The master browser only affects NetBIOS resolution, which should not be an issue in an Active Directory environment in the first place, as you should be relying on DNS instead of NetBIOS. If you do not trust DNS somehow, you should set up a WINS server (maybe on the domain controller) and point all your computers to the WINS server via DHCP.

    No matter what, you want to replace your master browsers with a WINS server to cut down on vulnerabilities and network broadcasts (bad security + bad network performance). My network has Windows 2003, Windows 2008, XP, Vista and 7, and we do not have any issues.

    From jackbean
  • In one way, introducing Windows 2008 and Windows 7 will interfere with master browser elections in NetBIOS. That being said, this is not an issue for many different reasons.

    First, you need to know that master browser selection is based on domain controller status and Windows version (this is where 2008 comes into play). The PDC emulator will always win master browser elections no matter what OS level it is at. For broadcast domains where the PDC emulator role is not present, domain controllers will try to become the master. Finally, if no DC is available, then the highest Windows version will win the election.

    With that being said, very few applications rely on NetBIOS these days. Almost all Microsoft applications and third party applications will resolve through DNS, not NetBIOS.

    Additionally, anyone relying on NetBIOS will have WINS deployed on the network. WINS removes the reliance on master browsers and allows NetBIOS to operate across subnets. Windows clients configured with a WINS server default to H-node, which means they query WINS before doing a broadcast for name resolution.

    Once that is all said and done, if you still needed NetBIOS, did not run WINS and had no domain controllers on a network, introducing Windows 2008 or Windows 7 would allow them to become master browsers on your network. Even if that were the case, Windows 2008 and Windows 7 will happily act as master browsers without any negative effects. If you still didn't want that to occur, then simply set the MaintainServerList and IsDomainMaster to No and False respectively. http://technet.microsoft.com/en-us/library/cc959923.aspx

    /me steps out of time machine...

    From Doug Luxem

Where is the best place to download old versions of MySQL server

I'm restoring an old server, and I need a copy of MySQL 4.1 for Windows. MySQL.com only seems to have versions going back to 5.0. I've seen old versions elsewhere, but they all seem to be from fairly sketchy websites. Is there a good unofficial place to get old versions of MySQL or other software?

  • With a bit of searching?

  • You can get it from here:

    ftp://ftp.fu-berlin.de/unix/databases/mysql/Downloads/MySQL-4.1/

  • ftp://ftp.fu-berlin.de/unix/databases/mysql/Downloads/MySQL-4.1/mysql-4.1.22-win32.zip ^^

    From henchman
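
    If that mirror is still up, fetching and sanity-checking the file from a shell would look roughly like this. The file name is the one linked above; verify it against the directory listing (and a published checksum, if you can find one) before trusting it.

        # List the 4.1 directory on the mirror, then fetch the Windows zip linked above
        curl -l ftp://ftp.fu-berlin.de/unix/databases/mysql/Downloads/MySQL-4.1/
        wget ftp://ftp.fu-berlin.de/unix/databases/mysql/Downloads/MySQL-4.1/mysql-4.1.22-win32.zip
        md5sum mysql-4.1.22-win32.zip    # compare against a checksum published by MySQL, if available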
  • If you look at their tech support page, you can officially get an old version of MySQL. It all depends on whether that is worth it to you compared with downloading from those sketchy places.


    Quote: "2 - Sign up for MySQL Vintage Support. For those customers where upgrading MySQL is not an option, MySQL Vintage Support provides ongoing support for older MySQL versions beyond their EOL date. With MySQL Vintage Support you can continue using MySQL versions, beyond their EOL dates, and continue to receive 24x7 Support, and access to pre-existing patches and KB articles. Additionally, with MySQL Vintage Support, you can also contract for custom bug fixes and custom builds for EOLed products."

  • Here is what I ended up doing. It turns out that a lot of the mirrors MySQL uses for current versions still have old versions sitting on their servers. So what I did was pretend I was going to download a newer version, grab a mirror URL from the download page, and then explore that mirror manually, where I found binaries for 4.1.

  • http://www.OldApps.com. It doesn't have MySQL 4.1, but you did say "or other software."

Random server lag, no CPU/mem/pagefile usage

We have a fairly new server running Windows 2003 SP2, and the past few days we've noticed random slowdowns. When I'm logged into the server over remote desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal. (Normal being 1.5 GB free in the case of memory.) This is what I see logged into the server, and then users start calling saying things are slow, timing out, and failing--anything to do with our server.

No events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time--different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it.)

I would've thought it was just the server, but a family member's home PC (entirely separate) running WinXPSP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?

  • I'm thinking something to do with the network or Internet for this issue since you say CPU, Mem, and Page don't seem active...

    Check your Windows Update settings for your server and for the home user on XP. I notice on my system at home that if updates are set to download automatically, the system sometimes really lags until those downloads finish.

    If that doesn't seem to help, check for NIC activity. I'm not sure what a good method for doing this would be. You can run Performance Monitoring and when the system seems slow check NIC activity in that. If you see spikes during lag times, you could then do a capture with Wireshark to see what traffic is going out/in of that server.

    If both of those don't seem to help, I'm not sure what else you could check for. Faulty hardware somewhere along the line would be my next guess.

    Kev : WU on the server is set to manual at the moment. Thanks, I'll check out Wireshark. (I hope it's not hardware! We just got this box... :-| )
    From Webs
  • Have you maybe got an over-zealous anti-virus doing some on access scanning of a large file somebody might be accessing/saving?

    It could also be slow disk I/O - what's the server doing, and has it got anything installed that might be saving large files or something similar?

    Process Explorer is absolutely fantastic at getting a quick view of CPU, memory and disk activity.

    Kev : No real-time AV, just scheduled after-hours ClamAV at this point. I don't think the server's doing anything out of the ordinary, just its normal file/AD/db services. I was thinking of Process Explorer, except that it can chew up RAM if you leave it running too long, and unfortunately, by the time the system is responsive enough for me to actually launch a program, it's too late.
    Webs : One would think an AV scan would generate CPU and at least Mem usage.
    Kev : Oh, I was thinking of ProcMon. I'll give PE a shot.
    Kev : Sometimes I can get Cobian 9 to take up 25% for brief periods, and during this it seems pretty laggy...but it has happened without Cobian open before.
    Kev : (BTW, that's not during a backup, that's just editing the configuration.)
    From Ben
  • Was digging around for an answer to the same type of thing. Nice fast server but took forever to pull up anything (especially related to AD), users complaining of "Exchange is attempting to receive data" type messages, and with no server load at all.

    Found another article that pointed to DNS. Sure enough, primary forward lookup was the PREVIOUS server which no longer exists.

    Deleted that entry and bam. Snappy again.

    Now when I pull up a user account out of ADUC it happens in less than a second as opposed to 5.

    Not the first time I've heard that DNS is everything on a server.

    Hope this helps.

    (2003 SP2 here also, btw)

    Kev : This is certainly possible, since we just transferred from our old server. But everywhere I've looked I've already removed references to the old server. In DNS->(server)->Properties, checked every tab. The only remaining references are its Pointer and HOST(A) records in the reverse and forward lookup zones, but that wouldn't be enough to do anything, would it? (I left them there so I can still boot up the old server if I need to look at how something was configured.)
    From Buck
  • I'm also having this issue with 2 of my servers. The first server was just a file/print server running 2003 R2 on a Dell 2900. Originally I thought it was the hardware, so I rebuilt the server on another 2900 and moved the data over, but the problem was still happening. The way I can see the problem is to do a dir /s on a large volume (for me it's D:); when the scrolling of files and folders lags is when everything else is lagged.

    For that server I ended up putting 2008 SP2 on it and the problem went away. Now I'm having the problem on a Hyper-V 2003 instance. The users report the lag when they do file operations on this server. Their home drives are mapped here; they will click on P:\ and wait about 10 to 15 seconds.

    Kev : I did happen to do a dir /s at one point...good call

Virtualisation for Disaster Recovery

Hi,

Can anyone give me any ideas/links so that I can better get an idea of how virtualisation can help me from a disaster recovery point of view?

We have a server sitting in a datacentre; it basically has a bunch of web services that sit on the internet and a big SQL Server database.

I'm not looking for anything massively detailed, just something to give me an idea of what's possible.

Many thanks

  • There are all manner of product-specific answers to this question but the most basic and easy explanation is that a physical server's identity/code/data/etc. are all kept on the disks plugged into the server itself whilst a virtual server's disks are actually just a big flat file on a disk.

    And if you can put this flat file onto a shared disk system and then replicate that file to another shared disk system at a second site then you have an identical copy of everything that makes up that virtual server but somewhere else, somewhere it can be restarted and carry on as though nothing has happened.

    As I say, there are lots of specific products to automate this or make it easier, but essentially it's easier because your servers are just files. Does that help? Come back if you have any more questions.

    Geoff Wray : Thanks for the feedback it's very useful to get an idea of what can be done. Do you think there would be any performance issues running as a virtual server?
    Chopper3 : Well yes, there's usually a 5-10% virtualisation overhead (more for IO-intensive VMs such as DBs, usually), but often people virtualise onto newer hardware so usually see an *increase* in performance over their older servers - but yes, on a like-for-like basis it'll be slightly slower.
    Geoff Wray : Thanks for the info
    From Chopper3
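
    As a very rough illustration of the "your servers are just files" point above: with the VM shut down (or snapshotted so the file is consistent), replicating it to a second site can be as simple as copying its directory. The paths and host name below are made up; real replication products handle consistency, incremental transfer and failover for you.

        # Sketch only: datastore path and DR host are examples
        rsync -av --progress /vmfs/volumes/datastore1/webserver/ \
            dr-host:/vmfs/volumes/datastore1/webserver/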
  • Here's an example: Our company web sites are hosted externally. As a result of issues with our previous host, which resulted in several days of down time, I now keep a replica of those sites on two machines. One is used for development and testing, so can at times vary from the live site a bit. The other is maintained as an exact replica. This second one is normally only powered up as required for re-replication.

    In the event of issues with the web host the machine with the exact replica can be powered up and brought online via a change to the DNS record. As we are a small company there's no way I could justify the expense of an extra server to cover the pretty small chance of it ever being required. Instead I use a virtual machine. It's not as powerful as I'd like but it is perfectly workable.

Joining machine to domain

Hi,

How can I join my workstation (my personal machine, so not part of the network) to the domain (Windows Server 2008 R2 is the host)?

Thanks

  • What's the operating system?

    Non-business versions of Windows explicitly forbid joining a domain. Otherwise, it's just advanced properties of My Computer, and it's on the Computer Name tab.

    Farseeker : Well, that's not *quite* all there is to it. As mh said, you also need a username/password for a domain account that is permitted to join machines
    Ben : Yes, agreed. I made the assumption the OP was the admin, but on reflection I think that maybe they're not. I would also want to know (as the admin of my network) if someone was just plugging stuff into it, as there are numerous IT policies they should be reminded of (like the one that says don't do that!). IT is so fundamental to so many businesses nowadays that if your machine malfunctions and takes even just a part of my network down, the company potentially stands to lose millions. If that happens, I, along with several of the senior managers, would be gunning for you.
    From Ben
  • First thing is you talk to an admin and let them know that you want to do this. They may have a "no personal machines on the network" policy and you risk violating it, which could land you in a heap of bother.

    Assuming that you've a version of Windows that can join a domain, your domain may be configured to use a user account dedicated to joining machines (this is an MS recommendation). If so you'll need the user name and password of this account in order to join, which I guess your admin won't idly give you.

    From mh

Providing high availability and failover using MySQL on EC2

I would like to have a highly-available MySQL system, with automatic failover, running on Amazon EC2 instances.

The standard approach to solving this problem is Heartbeat + DRBD, but I've found a lot of posts suggesting DRBD doesn't work on EC2, though none saying exactly why. Obviously, a serial heartbeat or a distinct network is out of the question in the virtualised environment. It would also be good to have the different servers be in different availability zones, but we're getting into a much harder problem there.

What are peoples' opinion on having a high uptime solution in "the cloud"?

Note: This question was asked before RDS with multi-AZ was announced, which is the nice automatic answer for today's modern IT professional. :)

  • I'd default to active/passive dual master replication using a floating VIP. (Heartbeat, OpenAIS, MMRM, or Pacemaker)

    I can't think of a reason why this isn't a good idea. Can you?

    MMRM

    crb : I'm not sure if EC2 will give me a floating IP, for one, although you might be able to fake it with elastic load balancing. Also, there is no facility for out-of-band communication for the heartbeat. I hope that these things can be worked around, which is the difficult part of this question.
    Warner : Ah, unfortunately, I cannot speak for EC2. If you can provide the constraints, I can likely provide recommendations from that point.
    From Warner
  • The do-it-yourself option would be to install MySQL on an EBS volume and use an elastic IP or dynamic DNS to switch which server you're pointing at on failure.

    You'll need an external server monitoring the heartbeat, which would then unmount the EBS volume, remount it on your backup server, and then either remap the IP or change the DNS (a rough sketch of these steps follows this answer). If you're worried about the filesystem itself, then you'll have to do LVM snapshotting or something similar to get copies of your data, and then you can back those up to S3 or an EBS volume as well.

    I like having the data on the EBS volume itself because you can grab EBS snapshots of it for backup without getting involved with the LVM stuff, if that sounds scary to you.

    Also of note, Amazon has an Enterprise MySQL package which I haven't used, but it is probably a better option. Their prices are usually pretty reasonable for support contracts.
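
    To make that manual failover concrete, the steps would look roughly like this with the classic EC2 command-line API tools. All volume, instance and device IDs and the elastic IP are placeholders, and in practice this is exactly what the external monitoring host would script.

        # Sketch only: IDs, device names and IP are placeholders
        # 1. On the failed primary (if still reachable): stop MySQL and unmount cleanly
        ssh primary 'service mysql stop; umount /var/lib/mysql'

        # 2. Move the EBS volume holding the data files over to the standby instance
        ec2-detach-volume vol-aaaaaaaa
        ec2-attach-volume vol-aaaaaaaa -i i-bbbbbbbb -d /dev/sdf

        # 3. On the standby: mount the data volume and start MySQL
        ssh standby 'mount /dev/sdf /var/lib/mysql; service mysql start'

        # 4. Repoint clients by moving the elastic IP (or update dynamic DNS instead)
        ec2-associate-address -i i-bbbbbbbb 203.0.113.10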

  • I think you really want a multi-zone RDS setup which was recently added to AWS.

    Read more here: http://aws.typepad.com/aws/2010/05/amazon-rds-multi-az-deployment.html

    If you weren't asking about AWS, I'd suggest a setup including DRBD. This would make sure that both servers stay in sync all the time. But I'm almost 100% sure this isn't possible yet on AWS.

    Generally, I'd be careful about snapshotting and all that - it's not a silver bullet! It takes a good while on AWS. The instance storage itself is a) not fast at all and b) not persistent! Even with EBS it's not really fast, and you still need to stop the I/O for a consistent snapshot.

    From Till

Why are there two backplane connectors inside a Dell PowerEdge R410?

Just received a new 1U server - a Dell PowerEdge R410.

There are four hot-swap drive trays, which can accommodate either SAS or SATA drives.

However, the odd thing is, instead of a single SFF8087 connector connecting the SAS HBA to the four-drive backplane, there is a SFF8087 cable that splits into two, and connects to the backplane in two places.

This makes no sense - a single SFF8087 cable is capable of supporting four drives, and there are only four drive slots, so ... a plain old SFF8087 cable would be sufficient to connect the SAS HBA to the backplane.

So why split the SFF8087 into two, with SFF8087 on each of the other (split) ends?

The reason this is important is because I do not intend to use the Dell HBA ... I have added my own 4-port 3ware 9650SE card to the system, which also has a SFF8087 port on it. The trouble is two-fold:

  • The Dell-provided split SFF8087 cable is just short enough that it cannot be connected to an add-in card

  • There is no such thing as a split SFF8087 cable for sale anywhere. 3ware doesn't make them, Adaptec doesn't make them, they're not on Amazon, etc. And this makes sense, of course, since there is no reason for that cable to exist - a single SFF8087 can handle all four drives.

So ... why did they do this ?

I am hoping the answer has something to do with unified vs. split drive arrangements, and that somehow this lets you tie one controller to half of the drives and another controller to the other half of the drives and that is why two SFF8087 connectors are on the backplane ... this would also suggest that if I had a plain old SFF8087 cable lying around, I could connect the 3ware directly to the first backplane port and I would see all four drives...

I'll know in a day or so when one arrives in the mail ...

  • Almost certainly it's because that backplane is an evolution of, rather than a new design replacing, the old ATA backplane that could only support two drives per channel. I work very closely with HP's server designers, and whilst they're always coming up with long lists of great new tech and optimisations they could implement, they're actually quite limited in how quickly they can introduce new aspects into their designs. I know it sounds stupid, but I really wouldn't put it past their controllers to somehow only support a master/slave system over a much more capable SAS/SATA cable spec. Hope this helps, and I'm more than happy for someone to put me right, but chances are it's just that dull/old-school :)

    Zypher : I don't talk to many Dell Server designer, but if you look at the Tech Guidebook, pg 22 Figure 27 each connector only controls two SAS drives, which seems to support your theory. linky: http://www.dell.com/downloads/global/products/pedge/en/server-poweredge-r410-technical-guide-book.pdf
    From Chopper3

Bypassing VLAN with known MAC address

I am evaluating subnetting our network with a Layer 2 switch and VLANs. From what I know, a VLAN only works on a broadcast domain, and if I know the MAC address of a remote computer on the same switch, I can bypass the VLAN security entirely by adding that MAC address to my own ARP table. Is that correct?

Thanks

  • No, it isn't. This may have been possible in some of the earliest implementations of VLANs (20 years ago...) but on any modern switch, once a port is tagged with an 802.1q VLAN, that's it. The switching engine won't allow VLAN hopping. Of course, if you have an insecure configuration (say, a host with interfaces on more than one network, with IP forwarding enabled...) you could have some security issues.

    I work at a rather large university (we have two Class B's, and still need most of a Class A for NATted clients). Our network is run on Cisco, Foundry, and Juniper hardware, and everything is VLANed. We've never had any issues with it, security or otherwise.

  • You are not correct. When a switch creates a VLAN, it is effectively the same as if you created two separate networks connected with their own switches. A person can no more bypass the VLAN using a direct MAC address than you could gain access to your neighbor across the street if you knew his MAC address.
    Think of it as two physically separated networks.

    jackbean : Can you clarify a little further? My understanding has been that VLAN tagging only affects broadcast packets. Cisco defines a VLAN as "a broadcast domain within a switched network." It also mentions, "VLANs improves performance and security in the switched network by controlling broadcast propagation." According to: http://www.ciscopress.com/articles/article.asp?p=102157 Thanks
    Scott Lundberg : VLAN tagging affects all packets that are tagged, regardless of whether they are broadcast or not. It improves performance because inherently a broadcast on VLAN 1 will not be passed to VLAN2 because the broadcast packet has been tagged with VLAN=1. This is an Ethernet (layer 2) property, so broadcast and unicast packets perform equally. Look at http://en.wikipedia.org/wiki/IEEE_802.1Q and notice that the VLAN is actually inserted into the EII frame. This applies for all packets, not just broadcast packets.
    Scott Lundberg : Concerning Cisco's definition, it means a broadcast domain includes all devices that would receive the broadcast. I.e. any device on the same VLAN, but that doesn't mean VLANs only apply to broadcast packets.
    jackbean : Thanks for the clarification.
  • There are some techniques to bypass VLAN tagging, but they only apply for some switches and in some configurations. If you have Cisco switches that have VLAN 1 on a trunk, you can send packets to machines in another VLAN (but not get anything back) if you send a .1q-encapsulated frame with the target VLAN as the VLAN tag.

    From Vatine