
HOW-TO: Execute the evasi0n iOS7 Jailbreak

The long-awaited jailbreak for iOS7 is out. If you have been holding back on upgrades because there was no jailbreak available, you are probably eager to grab it right away. A word of warning, though: if you own a 64-bit phone (the iPhone 5s), the current release of the evasi0n7 jailbreak will not work for you yet. You will have to wait a bit longer for a 64-bit Mobile Substrate to become available before executing the jailbreak.

First of all, download a copy of the evasi0n7 jailbreak. Then, if you don't have a recent backup of your phone, hook it up to iTunes and make one. Treat this step as mandatory rather than optional. If you want to risk it, you may proceed without a backup -- it's your phone, after all.

If you are coming to iOS7 from iOS6 or even iOS5, the backup is absolutely required, because the restore will wipe your phone!

In my case, I upgraded to iOS 7.0.4 from iOS 5.1.1, since Apple stopped signing restores to iOS6 once it released iOS7. But it's not too late. My iPhone4S is a 32-bit phone, so it was safe to "hack" with the evasi0n7 jailbreak. I used v1.0.2 from their website to jailbreak it.

With the requirements above in place, upgrade to iOS 7.0.4 via iTunes. Jailbreaks on over-the-air upgrades have been reported to fail, and I didn't want to experience what others already did.

(I'm assuming that since you were able to upgrade to iOS 7.0.4 -- like I did -- you have an internet connection. The evasi0n7 jailbreak needs to connect to the internet to download some binaries.)

Once iOS7 is installed on the device, fire up the evasi0n7 jailbreak executable and follow the on-screen instructions. It will look similar to this:

Uploading Jailbreak Data

Uploading evasi0n App

The device will reboot three times -- first to inject the jailbreak, next after executing the evasi0n7 application, and lastly after Cydia unpacks.

Rebooting Device

All of this takes less than 30 minutes.

Lastly, restore your phone's latest backup after the jailbreak. Enjoy.

ERROR: This Host or Cluster is not Valid Selection

The title of this article is an error message encountered when a template is converted back to, or deployed as, a virtual machine. This is on the VMware side of things, of course. Deploying templates is the fastest way to commission a virtual machine for whatever purpose it may serve. Compared to the conventional way of deploying bare-metal machines, which can take hours, virtual machine templates deploy in a matter of minutes, sometimes seconds on top-tier hardware. However, depending on how the templates were built, you may discover that some of them no longer deploy back as virtual machines.

One of those situations is described by the error in the title. Whenever you deploy or convert the template back to a virtual machine, the error "This Host or Cluster is not Valid Selection" is encountered. The error itself gives no clue about the mechanism that caused it. In the VMware world, you will be greeted with this screenshot.

Error Message

This error is actually caused by a resource missing on the host server -- typically an ISO on a datastore that was attached to the virtual machine prior to its conversion to a template. That is the most likely cause of the error. I used to throw such templates away since I didn't know what else to do, but that takes you right back to the conventional way of deploying bare-metal machines, and the advantage of using templates in the virtual world is defeated.

(DISCLAIMER: Read and understand the procedure first prior to execution. I will not be responsible for the outcome of your actions.)

First, remove the template from inventory. Remove it from inventory only -- do not delete it from disk. Otherwise, the template can no longer be imported back into the virtual infrastructure.

Then, open a terminal session to the ESXi host and look for the files related to the template. This is where a backdoor to the ESXi host comes in really handy.

Locate the datastore where the template is stored. The virtual machine template files are usually kept in a directory with the same name as the original virtual machine prior to conversion. The file with the .vmtx extension is the one you are after. List the contents of the directory, and if you see a file with .vmtx in its name, you are on the right track.
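As a rough illustration (the datastore and directory names below are just placeholders for mine), finding the file from the ESXi shell looks something like this:

# find /vmfs/volumes -name "*.vmtx"
/vmfs/volumes/datastore1/mytemplate/mytemplate.vmtx
# ls /vmfs/volumes/datastore1/mytemplate/
mytemplate.vmtx    mytemplate.vmdk    mytemplate-flat.vmdk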

Use an editor to view and edit this .vmtx template file. For brevity, I downloaded the .vmtx file to my notebook and opened it in Notepad. The screenshot is below:

Edit Template File

The lines to remove are the ones related to the virtual CD-ROM (resource ide1:0, as shown in the picture). Create a backup of the .vmtx file, then remove the three lines containing ide1:0 from it. Finally, import the template back into the virtual infrastructure. In VMware lingo, this action is "Add to Inventory".
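For reference, the three ide1:0 entries in my file looked roughly like the sample below -- the ISO path is only an example, and yours will point to whatever datastore file has gone missing. Back up the file first:

# cp mytemplate.vmtx mytemplate.vmtx.bak

ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "/vmfs/volumes/datastore1/ISO/CentOS-6.4-x86_64.iso"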

If this procedure is done right, virtual machines can be deployed from the template without the errors encountered earlier. This is what I did, and my templates are usable again.


ERROR: YUM -- Unfinished Transactions Remaining..

Linux is not perfect. Chances are that when installing packages from repositories, some error will show up -- most commonly a conflict between packages, which prevents the install or update from going through. On rare occasions, yum may fail to complete a transaction. That is when you will see an error message similar to the one below:

There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.

.. or this screenshot with the error message..

yum-complete-transaction

Sometimes a modified version of the message appears in this manner:

There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
The program yum-complete-transaction is found in the yum-utils package.

This latter message is a little more meaningful in that it gives you a clue that the executable "yum-complete-transaction" is supplied by the yum-utils rpm package. Install this rpm if required.
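If yum-utils isn't on the system yet, pulling it in is a one-liner:

# yum install yum-utils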

WARNING: Although yum itself suggests running "yum-complete-transaction", do not run it blindly. If you can avoid running it, do so. Running "yum-complete-transaction" without knowing what the previous transaction was may wipe out your whole system. It could be disastrous.

I know it is very inconvenient to have the error message show up on every yum transaction, but if that is what's bothering you, take the safer route and execute yum-complete-transaction with the "--cleanup-only" argument. The exact command to execute is:

# yum-complete-transaction --cleanup-only

This way yum will not make any changes to your system beyond clearing out the leftover transaction journal. Again, do not run yum-complete-transaction without a backup of your system; you may end up wiping the contents of your drive. I have seen this happen enough times to have learned my lesson -- from both my mistakes and the mistakes of others.
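If you want to see what the unfinished transaction actually contained before cleaning it up, the leftover journal files -- at least on the RHEL/CentOS boxes I have looked at -- sit under /var/lib/yum and can simply be listed and read:

# ls /var/lib/yum/transaction-all* /var/lib/yum/transaction-done*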

TIP: Taking Advantage of Virtual Hot Add Technology

As a system administrator, I would say virtualization is one of the best things to happen since sliced bread -- in terms of utilization and the thin provisioning mechanism it entails. What usually happens is that the guest operating system is tricked into believing it has the entire resource to itself, whereas it is actually allocated only a slice and gets more of the resource as necessary.

This is neat technology. The other side of the coin is that you can allocate a resource at only a fraction of what it may eventually need. What does this mean? The short term is skimping -- I do this when I can't estimate the exact requirement of an application. I rely on the "hot add" technology of virtualization to allocate the final pieces of the architecture once more accurate information is available. In virtualization lingo, the terms "hot add" and "hot plug" are often used interchangeably and pertain to the same thing.

Modern operating systems play a big role in this "hot add" technology. For example, in the Linux world, once an application has been tested and found to be CPU-bound, just hot add more cores. Not really that interesting, right? But wait -- you can do this without having to reboot the virtual machine. NO DOWNTIME! Adding virtual CPUs while the machine is running -- now that's cool stuff. This of course assumes that the hot add feature has been turned on.
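On the guest side, a freshly hot-added vCPU sometimes shows up offline and needs a nudge through sysfs before Linux will schedule on it. The CPU number below is just an example; it depends on how many vCPUs the guest already had:

# grep -c ^processor /proc/cpuinfo
# echo 1 > /sys/devices/system/cpu/cpu2/online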

This is how it's done on the VMware side of things.

Enable Hot Add

The same can be done with memory -- once a guest is identified to be memory-bound, just hot add more memory.
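Memory works similarly inside the Linux guest: the hot-added capacity appears as memory blocks in sysfs, and any block still marked offline has to be switched online before the guest can use it. The block number below is just an example:

# grep -l offline /sys/devices/system/memory/memory*/state
# echo online > /sys/devices/system/memory/memory40/state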

A recent experience taught me that this doesn't always work. It works for CPU hot plug, but not for memory hot add. I'm using recent versions of RHEL 6.4 x86_64 and CentOS 6.4 x86_64, and neither works well with memory hot add, although their older releases used to.

Executing a memory hot add on a RHEL/CentOS 6.4 guest produces this error:

------------[ cut here ]------------
WARNING: at arch/x86/mm/init_64.c:701 arch_add_memory+0xe2/0x100() (Not tainted)
Hardware name: VMware Virtual Platform
Modules linked in: vsock(U) vmci(U) autofs4 sunrpc ipv6 ipt_REJECT ppdev parport_pc parport microcode vmware_balloon e1000 shpchp sg i2c_piix4 i2c_core ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom mptsas mptscsih mptbase scsi_transport_sas pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: nf_conntrack]
Pid: 25, comm: kacpi_notify Not tainted 2.6.32-358.2.1.el6.x86_64 #1
Call Trace:
 [] ? warn_slowpath_common+0x87/0xc0
 [] ? warn_slowpath_null+0x1a/0x20
 [] ? arch_add_memory+0xe2/0x100
 [] ? add_memory+0xb7/0x1c0
 [] ? acpi_memory_enable_device+0x95/0x12b
 [] ? acpi_memory_device_add+0x118/0x121
 [] ? acpi_device_probe+0x50/0x122
 [] ? driver_probe_device+0xa0/0x2a0
 [] ? __device_attach+0x0/0x60
 [] ? __device_attach+0x0/0x60
 [] ? __device_attach+0x53/0x60
 [] ? bus_for_each_drv+0x64/0x90
 [] ? device_attach+0xa4/0xc0
 [] ? bus_probe_device+0x2d/0x50
 [] ? device_add+0x527/0x650
 [] ? pm_runtime_init+0xcb/0xe0
 [] ? device_register+0x1e/0x30
 [] ? acpi_add_single_object+0x837/0x9e8
 [] ? acpi_ut_release_mutex+0x63/0x67
 [] ? acpi_bus_check_add+0xe0/0x138
 [] ? acpi_os_execute_deferred+0x0/0x36
 [] ? acpi_bus_scan+0x3a/0x71
 [] ? acpi_bus_add+0x2a/0x2e
 [] ? acpi_memory_device_notify+0xa6/0x24f
 [] ? acpi_os_execute_deferred+0x0/0x36
 [] ? acpi_bus_get_device+0x2a/0x3e
 [] ? acpi_bus_notify+0x4b/0x82
 [] ? acpi_os_execute_deferred+0x0/0x36
 [] ? acpi_ev_notify_dispatch+0x64/0x71
 [] ? acpi_os_execute_deferred+0x29/0x36
 [] ? worker_thread+0x170/0x2a0
 [] ? autoremove_wake_function+0x0/0x40
 [] ? worker_thread+0x0/0x2a0
 [] ? kthread+0x96/0xa0
 [] ? child_rip+0xa/0x20
 [] ? kthread+0x0/0xa0
 [] ? child_rip+0x0/0x20
---[ end trace 081e3b980cb5c943 ]---
ACPI:memory_hp:add_memory failed
ACPI:memory_hp:Error in acpi_memory_enable_device
acpi_memhotplug: probe of PNP0C80:00 failed with error -22

 driver data not found
ACPI:memory_hp:Cannot find driver data

You will see the above error message when the command "dmesg" is executed in a terminal window. This appears to be a bug in the kernel code that handles memory hot add.

This is a bug that I hope gets fixed in future versions of the Linux kernel. If you rely on the memory hot add feature, be aware of this limitation in the newer versions of RHEL and CentOS. It seems to be rooted in the kernel code, so most likely all other flavors of Linux on the same kernel are in the same boat.

The Windows operating system doesn't seem to have this problem. Hot plug of virtual CPUs and hot add of memory both work without issue.

ERROR: TOO MANY USER/GDI objects are being used..

Multi-tasking is a function of the computer. Humans, to a certain degree, are not capable of multi-tasking. The idea was handed a "busted" verdict on the show Mythbusters when a subject was made to drive while talking on the phone -- and failed miserably. Studies have also shown that people suffer severe interference when performing even the simplest of jobs at the same time. But I'm not about to drag you into that debate.

My computer performs the multi-tasking for me -- that is what it is designed for. Given that, I expect it to complete the tasks I otherwise would not be able to finish by myself. It is a pain when the computer's own technical limitations get in the way. Allow me to share the experience and how I was able to tweak around it.

Apparently, Windows has an inherent limitation on the so-called "USER objects" and "GDI objects". This has something to do with how the operating system hands out resources on demand. I discovered it through this error from one of the applications I use to multitask:

TOO MANY USER/GDI objects are being used by applications!

Initially, I thought this was due to exhausting the 4GB of RAM installed in my notebook, so I got myself an additional 4GB module to beef it up. The result? Still the same error. I can't complain about having more memory, but it intrigued me enough to investigate further.

I found a post about USER and GDI objects and how the limit can be tested. It showed that the default limit is 10,000 objects per process. Is that huge? It appears so, but you can hit that wall easily -- even on Windows 7!

To see it for yourself, download Sysinternals' "testlimit.exe" (or its 64-bit equivalent if you're running a 64-bit system) and run it. See the output below.

testlimit64.exe
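From what I recall of the tool, the switch that exhausts USER objects is -u (and -g for GDI objects), but do confirm against the usage text the executable prints, since I'm quoting the switches from memory:

testlimit64.exe -u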

The question then is: can it be tweaked? I asked the same question, and the short answer is "YES". It requires a registry hack though.

WARNING: Before you proceed any further, know that registry changes can potentially damage your computer system. I will not be held accountable and responsible for the outcome of this hack.

Now that you have been warned, the values live under this branch: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows. The values are "GDIProcessHandleQuota" and "USERProcessHandleQuota". Both are set to a default of 2710 in hexadecimal, which translates to 10,000 in decimal.

On my system, I changed both values to 8000 hexadecimal, or 32768 in decimal. The testlimit output follows below.

testlimit64.exe
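If you prefer the command line over regedit, the same change can be applied with reg.exe from an elevated prompt; the data below is decimal 32768, i.e. the 8000 hexadecimal mentioned above. A reboot is the safe way to make sure the new quota is picked up:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v GDIProcessHandleQuota /t REG_DWORD /d 32768 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v USERProcessHandleQuota /t REG_DWORD /d 32768 /f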

After the above tweaks to the registry, I have not encountered the same error in the several weeks since.

Further checks on these values indicate that the theoretical maximum is 10,000 in hexadecimal, or 65,536 in decimal, accompanied by huge, bold-font warnings that pushing it to the maximum may cause system instability. However, the value I set above has proven to work for me. Your mileage may vary.