VM tracking tool

Q: What does it do?
A: It searches for duplicated VMs (VMs with the same ID on different hypervisors), misplaced VMs (running on a different hypervisor than the one Nova expects), and lost VMs (existing in libvirt but with no matching UUID in Nova).

Q: How does it work?
A: By comparing the output of Nova (nova list --all) with that of virsh (virsh list --all, virsh list --uuid).
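
For illustration, a minimal sketch of the comparison idea (not the tool itself), assuming OpenStack CLI access and a local virsh on the compute node; the file names are hypothetical:

    # Nova's view of instance UUIDs (any UUID-shaped token in the table)
    nova list --all-tenants | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' | sort -u > nova_uuids.txt
    # libvirt's view on this hypervisor, active and inactive domains
    virsh list --all --uuid | sed '/^$/d' | sort -u > virsh_uuids.txt
    # UUIDs present in libvirt but unknown to Nova are candidate orphans
    comm -13 nova_uuids.txt virsh_uuids.txt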

Q: How do I use it?
A: Run "collect_data.sh" to gather the data from Nova and libvirt, then run "analyze.py" to get the results.
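
For example (a sketch; depending on the environment, analyze.py may need to be run with python or python3):

    bash collect_data.sh    # pulls the Nova/libvirt listings via Salt; check for errors
    python analyze.py       # cross-checks the collected data and prints the findings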

Q: What does it need to run?
A: Salt access, bash on the compute node, and a correct hypervisor name pattern set in analyze.py (check the comments in the source before running it).
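
A quick way to see what that pattern has to match (the cmp[0-9]+ regex below is a hypothetical example, not the tool's default):

    # list the hypervisor hostnames Nova knows about, so the name pattern
    # in analyze.py can be checked against the real naming scheme
    nova hypervisor-list | grep -oE 'cmp[0-9]+' | sort -u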

Q: What is the typical flow to use it?
A: On a salt node (an end-to-end sketch follows this list):
 - Create an isolated folder for the activity, e.g.
   export cvpoperator=$(pwd | cut -d'/' -f3)
   mkdir /home/${cvpoperator}/compute_orphans/
   Copy the scripts to this folder:
   pushd /home/${cvpoperator}/compute_orphans/; cp /home/${cvpoperator}/cvp-configuration/scripts/vm_tracker/* .
   Or use the one from the cloned repo, if the repo is available locally
 - Run 'bash collect_data.sh'. Check for errors, if any.
 - Run 'bash gen_del_vms.sh'. Check for errors, if any.
 - Review the VMs found. Consider discussing the findings with the manager and the client (!)
 - Use the 'bash vm_del.sh <cmp_node> <instance-XXXX>' command to remove VMs
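
The same flow end to end (a sketch, assuming cvp-configuration is checked out under the operator's home directory):

    export cvpoperator=$(pwd | cut -d'/' -f3)
    mkdir -p /home/${cvpoperator}/compute_orphans/
    pushd /home/${cvpoperator}/compute_orphans/
    cp /home/${cvpoperator}/cvp-configuration/scripts/vm_tracker/* .
    bash collect_data.sh    # gather the Nova/libvirt data; check for errors
    bash gen_del_vms.sh     # generate the candidate deletion list; check for errors
    # review the findings first; only then, per VM and per compute node:
    # bash vm_del.sh <cmp_node> <instance-XXXX>
    popd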

Q: I do not want to run anything until I know what will happen.
A: See the examples in the corresponding folder: /home/${cvpoperator}/cvp-configuration/scripts/vm_tracker/examples