Energy consumption has become a major concern for cloud service providers, for both financial and environmental reasons. Studies show that cloud servers operate, most of the time, at only 10% to 50% of their maximum utilization. These same studies also show that servers that are kept on while idle or lightly utilized still consume significant amounts of energy. In this thesis, we develop new resource management techniques that enable automated, energy-efficient allocation of cloud data center resources, thereby reducing energy consumption and cutting service providers' electricity bills. Specifically, our developed techniques consist of the following four complementary frameworks:

1. Workload Prediction Framework. It predicts the number of Virtual Machine (VM) requests, along with their CPU and memory demands, using k-Means clustering and adaptive Wiener filter prediction techniques. The proposed prediction framework provides accurate estimates of the number of needed servers, thus reducing energy consumption by putting unneeded servers to sleep.

2. Resource Scheduling Framework. It reduces the time during which servers are kept on by placing VMs with similar completion times on the same server, while allocating more resources to the VMs that need more time to accomplish their tasks. The framework also allocates more resources to delay-sensitive tasks whose charging cost depends on how quickly they complete, so that they finish earlier and generate higher revenues. This is done by solving a convex optimization problem that guarantees that all scheduled tasks meet their hard deadlines.

3. Resource Overcommitment Framework. It consolidates data center workloads onto as few powered-on servers as possible by assigning VMs to a server in excess of its real capacity, anticipating that each assigned VM will utilize only part of its requested resources. The framework first determines the amount of server resources that can be overcommitted by predicting the future resource demands of admitted VMs. It then handles server overloads by predicting them before they occur and migrating VMs from overloaded servers to under-utilized or idle servers whenever an overload occurs or is predicted to occur.

4. Peak Shaving Control Framework. It spreads the data center's power demand more evenly over the entire billing cycle by making smart workload shifting and energy storage decisions, which yields substantial reductions in the grid's peak demand penalties. The framework accounts for real energy storage losses and constraints, and considers a heterogeneous cloud workload made up of multiple classes, each class having a different delay tolerance and price.

Several experiments based on real traces from a Google cluster show that our proposed frameworks achieve significant utilization gains, energy reductions, and monetary savings when compared to state-of-the-art cloud resource management techniques.
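To illustrate the kind of prediction the first framework relies on, the sketch below shows a one-step Wiener (linear MMSE) predictor for a demand series: the filter weights solve the normal equations R w = r built from the series' sample autocorrelation. The filter order, the ridge term, and the function name are illustrative choices, not the thesis's actual parameters.

```python
import numpy as np

def wiener_predict(history, order=4):
    """One-step Wiener-filter prediction of the next value in a series.

    Weights w solve R w = r, where R is the (Toeplitz) autocorrelation
    matrix of the series and r holds autocorrelations at lags 1..order.
    """
    x = np.asarray(history, dtype=float)
    mu = x.mean()
    xc = x - mu  # work on the zero-mean series
    n = len(xc)
    # Biased sample autocorrelation at lags 0..order (keeps R positive semidefinite)
    acf = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(order + 1)])
    R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
    r = acf[1:order + 1]
    # Small ridge term keeps the solve stable when the series is nearly constant
    w = np.linalg.solve(R + 1e-8 * np.eye(order), r)
    recent = xc[-1:-order - 1:-1]  # last `order` samples, newest first
    return mu + float(np.dot(w, recent))
```

In the full framework, a predictor like this would run per workload cluster (after k-Means grouping) to estimate upcoming VM counts and resource demands; idle servers beyond the predicted need can then be put to sleep.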
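The scheduling framework's placement idea (co-locating VMs with similar completion times so a server can sleep as soon as its VMs finish) can be sketched with a simple greedy first-fit pass; the thesis solves a convex optimization problem instead, so the function below is only a stand-in, and its tuple format and capacity value are assumptions.

```python
def place_by_completion_time(vms, capacity=8):
    """Greedy sketch of completion-time-aware placement.

    Sorts VMs by estimated completion time and fills servers first-fit,
    so VMs that finish together tend to share a server, which can then
    be put to sleep as soon as they complete.

    `vms` is a list of (completion_time, cpu_demand) tuples (illustrative).
    Returns a list of servers, each a dict with its used CPU and VM list.
    """
    servers = []
    for ct, cpu in sorted(vms):
        for s in servers:
            if s['load'] + cpu <= capacity:  # first server with room
                s['load'] += cpu
                s['vms'].append((ct, cpu))
                break
        else:  # no existing server fits: power on a new one
            servers.append({'load': cpu, 'vms': [(ct, cpu)]})
    return servers
```

Because placement is sorted by completion time, each server's VMs share similar finish times, shortening the window during which the server must stay on.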
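For the peak shaving framework, a minimal battery-dispatch sketch conveys the core mechanism: discharge stored energy when grid demand would exceed a peak threshold, and recharge (paying a charging-efficiency loss) when demand is low. The threshold, capacity, rate, and efficiency values are illustrative, and the loss model (all losses on the charge side) is a simplifying assumption rather than the thesis's storage model.

```python
def shave_peaks(demand, cap=50.0, rate=20.0, eff=0.9, threshold=100.0):
    """Greedy battery dispatch that flattens a grid-demand profile.

    demand:    per-slot power draw of the data center
    cap:       battery energy capacity
    rate:      max charge/discharge per slot
    eff:       charging efficiency (losses applied on charge only)
    threshold: target grid-demand ceiling
    Returns the resulting grid-demand profile.
    """
    soc = 0.0  # battery state of charge
    grid = []
    for d in demand:
        if d > threshold and soc > 0.0:
            # Discharge to pull grid demand down toward the threshold
            discharge = min(d - threshold, rate, soc)
            soc -= discharge
            grid.append(d - discharge)
        elif d < threshold and soc < cap:
            # Recharge in off-peak slots; only `eff` of drawn energy is stored
            charge = min(threshold - d, rate, (cap - soc) / eff)
            soc += charge * eff
            grid.append(d + charge)
        else:
            grid.append(d)
    return grid
```

Since utilities typically bill on the billing cycle's maximum demand, lowering the profile's peak (even while raising off-peak draw) directly reduces the peak demand penalty.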