DVB-Cube <<< The German PC and DVB Forum >>>

Title: Proxmox / ArchivistaBox
Post by: SiLæncer on 03 December, 2010, 20:53
With the ArchivistaBox (http://www.archivista.ch/de/) 64 Bit, an open-source KVM-based virtualization platform is available which, according to its developers, can be set up on any off-the-shelf computer within a few minutes.

The centerpiece of the release is the new installer (http://www.archivista.ch/de/pages/aktuell-blog/archivistavm-2.0-next-generation.php), which runs directly in RAM and deploys the software in an extremely short time. The system was recently presented at the LinuxDay in Dornbirn, where it was installed on a test machine in about 40 seconds. The presentation script (PDF (http://www.archivista.ch/archivistabox64bit.pdf)) is available.

The ArchivistaBox 64 Bit now includes the virtualization platform ArchivistaVM. The ArchivistaVM project was started about 15 months ago as a fork of the well-known Proxmox project. Unlike Proxmox, the ArchivistaBox concentrates on server virtualization with KVM.

The ArchivistaBox is easy to install and to keep up to date. It can be installed from CD or from a USB stick. Later updates do not require a full backup because, unlike with Proxmox, the virtualized instances are not overwritten.

The virtualized instances can be managed through a web browser or through a graphical interface. The latter is implemented via an X server with the Fluxbox window manager, which runs on the server.

The new version (http://www.archivista.ch/de/pages/aktuell-blog/archivistabox-64bit.php) received a current Linux kernel 2.6.35.9 in which all network drivers, including those from the staging area, are available. Among them are many optical network cards for which Linux drivers have only recently become available.

The ISO file of ArchivistaVM 64 Bit currently comes to about 320 MB and is freely available under the GPL. The larger ArchivistaBox 64 Bit, at 700 MB, still fits on a CD. A support forum (http://help.archivista.ch/forum) is available for questions about the products.

Source: www.pro-linux.de
Title: Proxmox Virtual Environment 3.1 released
Post by: SiLæncer on 22 August, 2013, 14:06
Version 3.1 of the virtualization platform Proxmox VE has been released. New in this version are, among other things, the SPICE protocol, a GlusterFS plugin, and an enterprise repository.

Proxmox Virtual Environment (Proxmox VE) is a free virtualization platform that combines the Linux hypervisor technology KVM with the container technology OpenVZ and enables the creation of high-availability clusters. Proxmox VE is licensed under the GNU Affero General Public License 3 (AGPL v3) and is developed primarily by the Vienna-based Proxmox Server Solutions GmbH.

Proxmox VE 3.1 contains some substantial changes compared to version 3.0, which was released just under three months ago. One of them is the enterprise repository. Whereas there used to be a single repository for software updates, equally accessible to all users with or without a support subscription, it has now been split into two separate repositories. The enterprise repository is configured by default, but it is reserved for users with support contracts. Those who want to use Proxmox VE without support can use the second repository for updates, whose packages, according to the vendor, are not tested as intensively. (A sketch of the two repository entries follows below.)
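
For orientation, a minimal sketch of the two APT repository entries on the Debian Wheezy based 3.x releases (the file names under sources.list.d are conventional and may differ on a given host):

    # /etc/apt/sources.list.d/pve-enterprise.list -- default, requires a subscription
    deb https://enterprise.proxmox.com/debian wheezy pve-enterprise

    # repository without subscription, less intensively tested
    deb http://download.proxmox.com/debian wheezy pve-no-subscription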

Full article (http://www.pro-linux.de/news/1/20159/proxmox-virtual-environment-31-erschienen.html)

Source: www.pro-linux.de
Title: Proxmox VE 3.4 with ZFS
Post by: SiLæncer on 21 February, 2015, 10:10
Proxmox Server Solutions GmbH has released version 3.4 of its open-source server virtualization solution Proxmox Virtual Environment (VE) for download. The outstanding new features are the integrated ZFS file system, a ZFS storage plug-in, and hotplug support. The new version is based on the current Debian Wheezy 7.8.

Proxmox VE (Virtual Environment) 3.4 integrates the ZFS file system (OpenZFS), which combines, among other things, file system and logical volume manager functionality. The system's installer now also allows selecting ZFS, ext3, or ext4 as the root file system during installation. With ZFS, all RAID levels are supported. ZFS can be used either as a local directory with support for all content storage types or as zvol block storage, currently with support for KVM images in raw format.

Also new in the current Proxmox version is the ZFS storage plug-in, which supports the use of a locally installed ZFS system and allows live snapshots and rollbacks. Space- and performance-efficient linked templates and clones are possible as well. The new plugin complements the plugins already available in Proxmox VE for ZFS over iSCSI, Ceph, GlusterFS, NFS, and others.

With the new hotplug feature, virtual disks, network cards, or USB devices can now be installed or swapped while the server is running (a sketch follows below). For all other installed virtual hardware components that do not yet support hotplug, Proxmox VE 3.4 records a 'pending changes' note in the web GUI. This lets the administrator keep track of the current status of their changes.
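
As a rough sketch of how this looks with the qm tool (shown with current option syntax, which may differ slightly on the 3.4 release; VM ID 100 and the storage name are examples):

    # allow hotplug of disks, network cards and USB devices for VM 100
    qm set 100 -hotplug disk,network,usb
    # hot-add a second network card while the VM is running
    qm set 100 -net1 virtio,bridge=vmbr0
    # hot-add a 32 GB virtual disk on storage local-lvm
    qm set 100 -virtio1 local-lvm:32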

Proxmox Virtual Environment (Proxmox VE) is a free virtualization platform that combines the Linux hypervisor technology KVM with the container technology OpenVZ and enables the creation of high-availability clusters. The solution is licensed under the GNU Affero General Public License 3 (AGPL v3), and the new version is available as an ISO image on the vendor's website (http://www.proxmox.com/downloads). For business customers, Proxmox Server Solutions offers various support services starting at 59.90 euros per year and CPU.

Source: www.pro-linux.de
Title: Proxmox VE 4.1
Post by: SiLæncer on 12 December, 2015, 19:00
Changelog

based on Debian Jessie 8.2.0
Linux kernel 4.2.6
improved startup/shutdown behavior (systemd)
enable NTP by default
installer: allow up to 8 disks for ZFS
KVM: add qemu agent GUI option
KVM: update network boot ROMs
Improve HA GUI for users with restricted permissions
add Galician language to GUI
LXC: add rootfs resize to GUI
LXC: add support for Fedora 22 and Debian stretch/sid, Ubuntu 15.10
LXC: support unprivileged containers (technology preview)
storage: added LVM thin support (technology preview)
Support for Turnkey Linux LXC appliances
added new pvereport command
countless bug fixes and package updates (for all details see bugtracker and GIT)

http://www.proxmox.com/downloads
Title: Proxmox VE 4.2
Post by: SiLæncer on 28 April, 2016, 16:32
What's new:

Most important improvements include:

    New GUI: Sencha Ext JS 6 framework with enhanced interactive features
    New LVM-thin and ZFS improvements help increase storage utilization
    ZFS-Plugin automatically configured out-of-the-box
    LXC containers with additional mount points
    Finally: Rate limit for network (containers)
    Improvements to the Ceph GUI
    SSL certificates with Let's Encrypt

Read the forum announcement here: https://forum.proxmox.com/threads/proxmox-ve-4-2-released.27131/

http://www.proxmox.com/downloads
Title: Proxmox VE 5.0 Beta 1
Post by: SiLæncer on 27 March, 2017, 17:30
Release Notes

We are proud to announce the release of the first beta of our Proxmox VE 5.x family - based on the great Debian Stretch.

With the first beta we invite you to test your hardware and your upgrade path. The underlying Debian Stretch is already in good shape and the 4.10 kernel performs outstandingly well. For example, the 4.10 kernel allows running Windows 2016 with Hyper-V as a guest OS (nested virtualization).

This beta release already provides packages for Ceph Luminous v12.0.0 (dev), the basis for the next long-term Ceph release.

What's next?

In the coming weeks we will integrate new features into the beta step by step, and we will fix all release-critical bugs.

http://www.proxmox.com/downloads
Title: Proxmox 5.0
Post by: SiLæncer on 05 July, 2017, 12:29
Release Notes

Based on Debian Stretch 9.0
Kernel 4.10.15
QEMU 2.9
LXC: update to 2.0.8
New asynchronous Storage Replication feature (needs ZFS, technology preview)
New/updated LXC templates (Debian, Ubuntu, CentOS, Fedora, OpenSUSE, Arch Linux, Gentoo and Alpine)
Updated/improved noVNC console
Ceph v12.1.0 Luminous (technology preview), packaged by Proxmox
live migration with local storage
GUI improvements

    USB and Host PCI address visibility
    improved bulk and filtering options

Improved installation ISO
Importing virtual machines from foreign hypervisors (see the wiki page Qemu/KVM_Virtual_Machines)
improved reference documentation with screenshots
countless bug fixes and package updates (for all details see bugtracker and GIT)

http://www.proxmox.com/downloads
Title: Proxmox VE 5.1
Post by: SiLæncer on 25 October, 2017, 18:30
Changelog

Based on Debian Stretch 9.2
Kernel 4.13.3
QEMU 2.9.1
LXC: update to 2.1
Ceph 12.2.1 (Luminous LTS, stable), packaged by Proxmox
ZFS 0.7.2
Improved reference documentation with screenshots
Countless bug fixes and package updates (for all details see bugtracker and GIT)

http://www.proxmox.com/downloads
Title: Proxmox VE 5.2
Post by: SiLæncer on 17 May, 2018, 16:40
Changelog

Based on Debian Stretch 9.4
Kernel 4.15.17
QEMU 2.11.1
LXC 3.0.0
Ceph 12.2.5 (Luminous LTS, stable), packaged by Proxmox
ZFS 0.7.8
Cloudinit GUI support
Cluster create/join nodes via GUI
Certificate management including Let's Encrypt GUI
SMB/CIFS Storage plugin (supports backups, images, templates, ISO and containers)
Display IP for VM (using qemu-guest-agent)
LXC: templates and clones, move volume/disk
Create and edit new roles via GUI
I/O bandwidth limits for restore operations (globally, per storage or per restore job)
new and improved xterm.js integration including reconnect logic (on container reboots or restart migrations)
Basic/Advanced GUI
ebtables support
Improved reference documentation
Countless bug fixes and package updates (for all details see bugtracker and GIT)

http://www.proxmox.com/downloads
Title: Proxmox VE 5.3
Post by: SiLæncer on 05 December, 2018, 17:30
Release Notes

Proxmox Server Solutions GmbH today unveiled Proxmox VE 5.3, its latest open-source server virtualization management platform. Proxmox VE is based on Debian Stretch 9.6 with a modified Linux Kernel 4.15. Ceph Storage has been updated to version 12.2.8 (Luminous LTS, stable), and is packaged by Proxmox.

Proxmox VE and CephFS
Proxmox VE 5.3 now includes CephFS in its web-based management interface, expanding its already comprehensive list of supported file and block storage types. CephFS is a distributed, POSIX-compliant file system that builds on the Ceph cluster. Like Ceph RBD (Rados Block Device), which is already integrated into Proxmox VE, CephFS now serves as an alternative interface to Ceph storage. On CephFS, Proxmox allows storing VZDump backup files, ISO images, and container templates. The distributed file system CephFS eliminates the need for external file storage such as NFS or Samba and thus helps reduce hardware costs and simplifies management.

The CephFS file system can be created and configured with just a few clicks in the Proxmox VE management interface. To deploy CephFS, users need a working Ceph storage cluster and a Ceph Metadata Server (MDS) node, which can also be created in the Proxmox VE interface (a CLI sketch follows below). The MDS daemon separates metadata from data and stores them in the Ceph file system. At least one MDS is needed, but it's recommended to deploy multiple MDS nodes to improve availability and avoid a single point of failure. If several MDS nodes are created, only one is marked as 'active' while the others stay 'passive' until the active one fails.
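
A minimal CLI sketch of this setup, assuming a working Ceph cluster managed by Proxmox VE (the MDS can equally be created in the web interface; commands follow the pveceph tool):

    # create a metadata server (MDS) on the current node
    pveceph mds create
    # create the CephFS and register it as a Proxmox VE storage entry
    pveceph fs create --name cephfs --add-storage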

Further Improvements in Proxmox VE 5.3
Proxmox VE 5.3 brings many improvements in storage management. Via disk management, it is possible to easily add ZFS raid volumes, LVM, and LVM-thin pools, as well as additional simple disks with a traditional file system. The existing ZFS over iSCSI storage plug-in can now access LIO targets in the Linux kernel. Nesting is enabled for LXC containers, making it possible to use LXC or LXD inside a container. Also, access to an NFS or CIFS/Samba server can be configured inside containers. For the keen and adventurous user, Proxmox VE brings a simplified configuration of PCI passthrough and virtual GPUs (vGPUs such as Intel KVMGT), now even possible via the web GUI.

Countless bugfixes and smaller improvements are listed in the release notes and can be found in detail in the Proxmox bugtracker or in the Git repository.

http://www.proxmox.com/downloads
Title: Proxmox VE 5.4
Post by: SiLæncer on 12 April, 2019, 19:45
Release Notes

Installing Ceph via user interface with the new wizard – Integrated into the Proxmox VE software stack since 2014, the distributed storage technology Ceph comes with its own packages and support from the Proxmox team. The configuration of a Ceph cluster has already been available via the web interface; now, with Proxmox VE 5.4, the developers have brought the installation of Ceph from the command line to the user interface, making it extremely fast and easy to set up and configure a hyper-converged Proxmox VE/Ceph cluster. Additionally, enterprises on a budget can use commodity off-the-shelf hardware, allowing them to cut costs for their growing data storage demands.

Greater Flexibility with High Availability improvements – Proxmox VE 5.4 provides new options to set the HA policy data center-wide, changing how guests are treated upon a node shutdown or reboot. This brings greater flexibility and choice to the user. The policy choices are:

- Freeze: always freeze services—independently of the shutdown type (reboot, poweroff).
- Fail-over: never freeze services—this means a service will get recovered to another node if possible and if the current node doesn’t come back up in the grace period of one minute.
- Default: this is the current behavior—freeze on reboot but do not freeze on poweroff.
Suspend to disk/hibernation support for Qemu/KVM guests – With Proxmox VE 5.4, users can hibernate Qemu guests independently of the guest OS and have them resume properly on the next restart (see the sketch below). Hibernation saves the RAM contents and the internal state to permanent storage. This allows users to preserve the running state of their qemu-guests across most upgrades to and reboots of the PVE node. Additionally, it can speed up the startup of guests running complex workloads, or workloads which need lots of resources at initial setup but free them later on.
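
A short sketch of hibernating a guest from the command line (VM ID 100 is an example):

    # write RAM contents and internal state to storage and stop the VM
    qm suspend 100 --todisk 1
    # the next start resumes the preserved running state
    qm start 100
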
Security: Support for U2F Authentication – Proxmox VE 5.4 supports the U2F (Universal 2nd Factor) protocol, which can be used in the web-based user interface as an additional method of two-step verification for users. U2F is an open authentication standard that simplifies two-factor authentication. Since it is required in certain domains and environments, this is an important improvement to security practices. The new U2F authentication and the TOTP second-factor authentication can be configured by each user themselves, without needing a 'User.Modify' permission.
Improved ISO installation wizard – The Proxmox VE ISO installation wizard has been optimized, offering the ability to go back to a previous screen during installation. Users can adapt the choices they made without restarting the complete installation process. Before the actual installation, a summary page displays all relevant information.
Improved Qemu Guest creation wizard – As often requested by the Proxmox community, options such as machine type (q35, pc-i440fx), firmware (SeaBIOS, UEFI), or SCSI controller can now be selected directly in the VM creation wizard, and dependent options are set to sensible values automatically.

http://www.proxmox.com/downloads
Title: Proxmox VE 6.0
Post by: SiLæncer on 17 July, 2019, 19:30
Release Notes

New features in Proxmox VE 6.0

    Ceph Nautilus (14.2) and an improved Ceph user interface: With a Proxmox VE/Ceph cluster, a hyper-converged infrastructure (HCI) can be set up and managed at a glance. Proxmox VE 6.0 integrates the latest version, Ceph 14.2.1 (Nautilus), and also brings many new features to the web-based management interface. The cluster overview for Ceph is now also shown in the datacenter view; a new donut chart visualizes the activity and state of the placement groups (PGs); the version of all Ceph services is displayed, making it easier to spot outdated services; the configuration settings from the config file and the database can be displayed; a new selector makes it possible to choose the public and cluster networks in the web interface; and encryption of OSDs can be enabled via a checkbox right at creation time.
    Cluster communication with Corosync 3 and Kronosnet (kNet): Proxmox VE 6.0 moves the cluster communication to Corosync 3, which changes the on-the-wire format. Corosync uses unicast as the default transport medium. This provides better failover control, since priorities can be assigned to the different networks and take effect when an outage occurs. A new network selection widget in the web interface makes it easier to pick the correct link address and thus helps avoid typos.
    ZFS 0.8.1 with support for native encryption and TRIM for SSDs: With the update to the latest ZFS version 0.8.1, the file system can be encrypted natively in Proxmox VE. Encryption is integrated directly into the `zfs` utility, which allows for convenient key management. Proxmox VE 6.0 also adds TRIM support: the command `zpool trim` informs SSDs about unused blocks, improving resource usage and contributing to SSD longevity. Furthermore, support for pool-level checkpoints was added. (A short sketch follows after this list.)
    Support for ZFS on UEFI and on NVMe: The Proxmox VE ISO installer now supports ZFS root via UEFI, so for example a ZFS mirror on NVMe SSDs can be booted. Using `systemd-boot` as the bootloader instead of GRUB allows all pool-level features to be enabled on the root pool.
    QEMU 4.0.0: With Proxmox VE 6.0, users can live-migrate guests with local storage via the web interface, and more CPU flags can be set for virtual machines. Support for Hyper-V enlightenments was added, which improves the performance of Windows running in a virtual machine under QEMU/KVM.
    User-defined Cloudinit configuration: Proxmox VE 6.0 adds support for user-defined Cloudinit configurations, which can be stored as a snippet on a storage. The command `qm cloudinit dump` outputs the current Cloudinit configuration as a starting point for extensions.
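
A minimal sketch of the new ZFS 0.8 features from the shell (pool and dataset names are examples):

    # create a natively encrypted dataset; zfs prompts for the passphrase
    zfs create -o encryption=on -o keyformat=passphrase rpool/secure
    # inform the SSDs backing the pool about unused blocks
    zpool trim rpool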

Further new features in Proxmox VE 6.0

    Old kernel images are cleaned up automatically: old images are no longer flagged as 'NeverAutoRemove'. This avoids problems, for example, when /boot is mounted on a small partition.
    Guest status display in the tree view: additional guest states (migration, backup, snapshot, locked) are shown directly in the tree overview.
    Improved ISO detection in the installer: the ISO detection in the installer was reworked and covers more devices, minimizing detection problems on certain hardware.
    Pool-level backups: backups of an entire pool can now be created. If a pool is selected as the backup target instead of an explicit list of guests, new pool members are automatically included in the backup and removed members are automatically excluded.
    The authentication key is rotated automatically every 24 hours: limiting the key lifetime to 24 hours reduces the impact of a lost key or of deliberate security breaches by personnel.
    The node view in the user interface offers a faster syslog view.

By using Proxmox VE 6.0 as an open-source alternative to proprietary virtualization management solutions, companies can centralize and modernize their IT infrastructure and convert it into a flexible and cost-effective software-defined datacenter.

http://www.proxmox.com/downloads
Title: Proxmox VE 6.1 released
Post by: SiLæncer on 05 December, 2019, 17:00
Release Notes

We are very excited to announce the general availability of Proxmox VE 6.1.

It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.

This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it’s now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F.

The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown.

In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.

We have some notable bug fixes, among them the QEMU monitor timeout issue and stability improvements for Corosync. Countless other bugfixes and smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1

http://www.proxmox.com/downloads
Title: Proxmox VE 6.2
Post by: SiLæncer on 13 May, 2020, 16:30
Release Notes

New features in Proxmox VE 6.2

New features in the web-based user interface:

    In addition to the already available HTTP-based validation path for Let's Encrypt TLS certificates, Proxmox VE now also implements domain validation via the DNS-based challenge mechanism. This makes it easier to set up trusted certificates via the ACME protocol.
    Full support for up to eight Corosync links per cluster. Using multiple networks increases cluster availability.
    In the 'storage content' view, IT administrators can filter the stored data by the 'creation date' column, which simplifies, for example, finding a backup from a specific date.
    The language of the user interface can be changed without restarting the session. In addition, an Arabic translation was added, making Proxmox VE available in a total of 20 languages.

Linux containers:

    The integrated container technology was updated to LXC 4.0.2 and lxcfs 4.0.3. Proxmox VE 6.2 allows creating container templates on directory-based storage.
    New LXC templates for Ubuntu 20.04, Fedora 32, CentOS 8.1, Alpine Linux, and Arch Linux are available.

Zstandard for Backup/Restore:

    The integrated Proxmox VE backup manager supports Zstandard (zstd), the highly efficient and fast lossless data compression algorithm.

User and permission management:

    Proxmox VE uses role-based user and permission management for all objects such as VMs, storage, nodes, etc. The new 'LDAP sync' feature makes it possible to synchronize users and groups from an LDAP server into the Proxmox user and group permission management.
    With support for API tokens, Proxmox VE allows other systems or clients access to most of the REST API. Tokens can be generated for individual users; optionally, separate permissions and expiration dates can be configured to limit the scope and duration of access. If an API token is compromised, it can be revoked without having to disable the user itself. (A short sketch follows after this list.)
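
A short sketch of the token workflow with the pveum tool (user name, token ID, and host are examples; option names are based on current pveum syntax):

    # create a token with separate privileges for user automation@pve
    pveum user token add automation@pve deploy --privsep 1
    # revoke it later without disabling the user itself
    pveum user token remove automation@pve deploy
    # authenticate an API request with the token (the secret is shown once at creation time)
    curl -H 'Authorization: PVEAPIToken=automation@pve!deploy=<secret>' \
        https://pve.example.com:8006/api2/json/version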

Further new features

    QEMU/KVM: Support for live migration with replicated disks (storage replication with ZFS) was added.
    The improved Ceph uninstall process makes it even easier to try out the Ceph storage technology with Proxmox VE.

http://www.proxmox.com/downloads
Title: Proxmox VE 7.0
Post by: SiLæncer on 07 July, 2021, 21:30
Release Notes

    Based on Debian Bullseye (11)
    Ceph Pacific 16.2 as new default
    Ceph Octopus 15.2 continued support
    Kernel 5.11 default
    LXC 4.0
    QEMU 6.0
    ZFS 2.0.4

Changelog Overview

    Installer:
        Rework the installer environment to use switch_root instead of chroot, when transitioning from initrd to the actual installer.

            This improves module and firmware loading, and slightly reduces memory usage during installation.

        Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
        Improve ISO detection:
            Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
            Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
        Use zstd compression for the initrd image and the squashfs images.
        Setup Btrfs as root file system through the Proxmox VE Installer (Technology preview)
        Update to busybox 1.33.1 as the core-utils provider.

    Enhancements in the web interface (GUI):
        The node summary panel shows a high level status overview, while the separate Repository panel shows in-depth status and list of all configured repositories. Basic repository management, for example, activating or deactivating a repository, is also supported.
        Notes panels for Guests and Nodes can now interpret Markdown and render it as HTML.
        On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
        The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
        Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
        Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
        Improved translations, among others:
            Arabic
            French
            German
            Japanese
            Polish
            Turkish

    Access Control:
        Single-Sign-On (SSO) with the new OpenID Connect access realm type.

        You can integrate external authorization servers, either using existing public services or your own identity and access management solution, for example, Keycloak or LemonLDAP::NG.

        Added new permission Pool.Audit to allow users to see pools, without permitting them to change the pool.

        See breaking changes below for some possible impact in custom created roles.

    Virtual Machines (KVM/QEMU):
        QEMU 6.0 has support for io_uring as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.

        The new default can be overridden in the guest config via qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native (where, for example, DRIVE would be scsi0 and the OPTS can be taken from the qm config VMID output).

        EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
        Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
            This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
            Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.

        With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.

        VM snapshot states are now always removed when a VM gets destroyed.
        Improved logging during live restore.

    Container
        Support for containers on custom storages.
        Clone: Clear the cloned container's `/etc/machine-id` when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.

    Migration
        QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.

        Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.

        Containers: The force parameter to pct migrate, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as shared.

    High Availability (HA):
        Release LRM locks and disable watchdog protection if all services of the node the LRM is running on have been removed and no new ones were added for over 10 minutes.

        This reduces the possible subtle impact of an active watchdog after a node was cleared of HA services, for example, when HA services were previously only configured for evaluation.

        Add a new HA service state recovery and turn the fence state into a transition to that new state.

        This gives a clear distinction between services that are about to be fenced and services whose node has already been fenced and which are now awaiting recovery.

        Continuously retry recovery, even if no suitable node was found.

        This improves recovery for services in restricted HA groups, where a quorate and working partition can exist without any node currently being available for a specific service.
        For example, when HA is used to ensure that a service using local resources, like a VM on local storage, is restarted and kept up as long as its node is running.

        Allow manually disabling HA services that are currently in the recovery state, giving admins more control in those situations.

    Backup and Restore
        Backups of QEMU guests now support encryption using a master key.
        It is now possible to back up VM templates with SATA and IDE disks.
        The maxfiles parameter has been deprecated in favor of the more flexible prune-options (a sketch follows after this list).
        vzdump now defaults to keeping all backups, instead of keeping only the latest one.
        Caching during live restore was reworked, significantly reducing both the total restore time and the time until the guest is fully booted.
        Support file-restore for VMs using ZFS or LVM for one, or more, storages in the guest OS.
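
        A sketch of the new retention options on a manual vzdump run (storage name and retention values are examples):

            # keep the last 3 backups plus 7 daily and 4 weekly ones, prune older archives
            vzdump 100 --storage local --prune-backups keep-last=3,keep-daily=7,keep-weekly=4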

    Network:
        Default to the modern ifupdown2 for new installations using the Proxmox VE official ISO. The legacy ifupdown is still supported in Proxmox VE 7, but may be deprecated in a future major release.

    Time Synchronization:
        Due to the design limitations of systemd-timesync, which make it problematic for server use, new installations will install chrony as the default NTP daemon.

        If you upgrade from a system using systemd-timesyncd, it's recommended that you manually install either chrony, ntp or openntpd.

    Ceph Server
        Support for Ceph 16.2 Pacific
        Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple public_networks defined.

        Note that multiple public_networks are usually not needed, but in certain deployments, you might need to have monitors in different network segments.

        Improved support for IPv6 and mixed setups, when creating a Ceph monitor.
        Beginning with Ceph 16.2 Pacific, the balancer module is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs.
        Newly created Bluestore OSDs will benefit from the newly enabled sharding configuration for rocksdb, which should lead to better caching of frequently read metadata and less space needed during compaction.

    Storage
        Support for Btrfs as technology preview
            Add an existing Btrfs file system as storage to Proxmox VE, using it for virtual machines and containers, as a backup target, or to store and serve ISO and container appliance images.
        The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit[1].
        More use of content-type checks instead of checking a hard-coded storage-type list in various places.
        Support downloading ISO and container appliance images directly from a URL to a storage, including optional checksum verification.

    Disk Management
        Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them. Note that wiping a disk is a destructive operation: any data on the disk will be destroyed permanently.

    pve-zsync
        Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.

    Firewall
        The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations).

    Certificate management
        The ACME standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface.

Breaking Changes

    Pool permissions

    The old permission Pool.Allocate now only allows users to edit pools, not to see them. Therefore, Pool.Audit must be added to existing custom roles with the old Pool.Allocate to preserve the same behavior. All built-in roles are updated automatically.

    VZDump
        Hookscript: The TARFILE environment variable was deprecated in Proxmox VE 6, in favor of TARGET. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
        The size parameter of vzdump has been deprecated, and setting it is now an error.

    API deprecations, moves and removals
        The upgrade parameter of the /nodes/{node}/(spiceshell|vncshell|termproxy) API method has been replaced by providing upgrade as cmd parameter.
        The /nodes/{node}/cpu API method has been moved to /nodes/{node}/capabilities/qemu/cpu
        The /nodes/{node}/ceph/disks API method has been replaced by /nodes/{node}/disks/list
        The /nodes/{node}/ceph/flags API method has been moved to /cluster/ceph/flags
        The db_size and wal_size parameters of the /nodes/{node}/ceph/osd API method have been renamed to db_dev_size and wal_dev_size respectively.
        The /nodes/<node>/scan/usb API method has been moved to /nodes/<node>/hardware/usb

    CIFS credentials have been stored in the namespaced /etc/pve/priv/storage/<storage>.pw instead of /etc/pve/<storage>.cred since Proxmox VE 6.2 - existing credentials will get moved during the upgrade, allowing the fallback code to be dropped.

    qm|pct status <VMID> --verbose, and the respective status API call, only include the template line if the guest is a template, instead of outputting template: for guests which are not templates.

Known Issues

    Network: Due to the updated systemd version, and for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
        Some may change their name. For example, due to newly supported functions, a change from enp33s0f0 to enp33s0f0np0 could occur.

            We observed such changes with high-speed Mellanox models.

        Bridge MAC address selection has changed in Debian Bullseye - it is now generated based on the interface name and the machine-id (5) of the system.

        Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.

    If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.

    Container:
        cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 or newer support cgroupv2 [1], as do containers that do not use systemd as init system in the first place (e.g., Alpine Linux or Devuan).

        CentOS 7 and Ubuntu 16.10 are two prominent examples of Linux distribution releases whose systemd version is too old to run in a cgroupv2 environment; for details and possible fixes see:

https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat

http://www.proxmox.com/downloads
Title: Proxmox VE 7.2
Post by: SiLæncer on 22 June, 2022, 18:20
Changelog


    Backup/Restore:

        Notes templates: Meta-information can be added via a notes-template for backup jobs, to better distinguish and search for backups. This template is evaluated as soon as the job is executed, and added to any resulting backup. Notes templates can contain template variables like {{guestname}} or {{cluster}} (a sketch follows after this list).
        To benefit from the Rust code of Proxmox Backup Server, the Proxmox developers make use of perlmod, a Rust crate which allows exporting Rust modules as Perl packages. perlmod is used by Proxmox to transfer data between Rust and Perl, thus implementing parts of Proxmox VE and Proxmox Mail Gateway in Rust.
        The next-event scheduling code was updated via this Perl-to-Rust-binding (perlmod) and now uses the same code as Proxmox Backup Server. Users can not only specify the existing weekday, time, and time range, but now also a specific date and time (e.g., *-12-31 23:50; New Year's Eve, 10 minutes before midnight every year), date ranges (e.g., Sat *-1..7 15:00; first Saturday every month at 15:00), or repeating ranges (e.g., Sat *-1..7 */30; first Saturday every month, every half hour).
        Some basic restore settings, for example guest name or memory, can now be overwritten in the enhanced backup-restore dialog in the web interface.
        A new ‘job-init’ hook step was added to the backup process. Among other things, it can be used to prepare the backup storage, for example, by starting the storage server.
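
        A sketch of a notes template on a manual vzdump invocation ({{node}} is assumed to be available alongside the variables named above):

            # every resulting backup carries a rendered note for easier searching
            vzdump 100 --storage local --notes-template '{{guestname}} on {{node}} ({{cluster}})'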

    High Availability Manager:

        By improving the local resource manager (pve-ha-lrm) scheduler which launches workers, the amount of configurable services that can be handled per single node has increased. This helps in large deployments, as the services at the end of the queue are also checked to ensure that they are still in the target state.
        By introducing a skip-round command to the integrated HA simulator in Proxmox VE, it has become easier to test races in scheduling (on the different nodes).
    Cluster: For the creation of new VMs or containers, version 7.2 allows you to configure a range from which new VMIDs are proposed via the web interface (see the datacenter.cfg sketch after this list). The lower and upper boundaries can be set in the Datacenter -> Options panel. Setting lower equal to upper disables auto-suggestion completely, meaning the administrator has to enter an ID manually.
    Ceph: Proxmox VE supports Ceph Pacific 16.2.7 and Ceph Octopus 15.2.16 (with continued support until mid 2022). This version now also supports creating and destroying erasure-coded pools, which can be added as Proxmox VE storage entries, and help to reduce the amount of disk space required. A new option in the GUI allows for passing the keyring secrets of external Ceph clusters when adding an RBD or CephFS storage to Proxmox VE.
    Web interface: Further enhancements in the web interface allow for example for safe reassignment of a VM disk or CT volume to another guest on the same node; the reassigned disk/volume can be attached at a different bus/mountpoint on the destination guest. This can help in cases of upgrades, restructuring, or after disaster recovery.
    Management: Many improvements in Proxmox VE 7.2 enable even more convenient management of the system. For example, a particular kernel version can be selected to boot persistently from a running system, through ‘proxmox-boot-tool kernel pin’. The selection can be used either indefinitely or just for the next boot. This eliminates the need to watch the boot process to select the desired kernel version in the bootloader screen.
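
    A sketch of the VMID range setting in the datacenter configuration (the next-id property name follows the current datacenter.cfg schema):

        # /etc/pve/datacenter.cfg -- propose new VMIDs only from this range
        next-id: lower=1000,upper=1999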

Further enhancements and Bug fixes

    In the installation ISO, ZFS installs can be configured to use various compression algorithms (e.g., zstd or gzip). Additionally, the memtest86+ package, a tool aimed at memory failure detection, has been updated to the completely rewritten 6.0b.
    Further improvements have been added to virtual machines (KVM/QEMU); one to highlight is support for the accelerated virtio-gl (VirGL) display driver. For VirtIO and VirGL display types, SPICE is enabled by default. In modern Linux distributions, changing the graphics card to VirGL can significantly increase frames per second (FPS). For Proxmox containers (LXC), many templates have also been refreshed or newly added, such as the NixOS container template.
    The Proxmox VE Android app now provides a simple dark theme and enables it if the system settings are configured to use dark designs. The mobile app also provides an inline console by relaying noVNC for VMs, and xterm.js for containers and the Proxmox VE node shell in the GUI.
    To prevent a network outage during the transition from ifupdown to ifupdown2, the ifupdown package was modified to not stop networking upon its removal.

http://www.proxmox.com/downloads
Title: Proxmox VE 7.4
Post by: SiLæncer on 24 March, 2023, 18:20
Changelog


    Based on Debian Bullseye (11.6)
    Latest 5.15 Kernel as stable default
    Newer 6.2 kernel as opt-in
    QEMU 7.2
    LXC 5.0.2
    ZFS 2.1.9
    Ceph Quincy 17.2.5
    Ceph Pacific 16.2.11

Highlights

    Proxmox VE now provides a dark theme for the web interface.
    Guests in the resource tree can now be sorted by their name, not only by VMID.
    The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only on recovery.
    Added a CRM command to the HA manager to switch an online node manually into maintenance mode (without reboot).

Changelog Overview
Enhancements in the web interface (GUI)

    Add a fully-integrated "Proxmox Dark" color theme variant of the long-time Crisp light theme.

    By default, the prefers-color-scheme media query from the Browser/OS will be used to decide the default color scheme.
    Users can override the theme via a newly added Color Theme menu in the user menu.

    Add "Proxmox Dark" color theme to the Proxmox VE reference documentation.

    The prefers-color-scheme media query from the Browser/OS will be used to decide if the light or dark color scheme should be used.
    The new dark theme is also available in the Proxmox VE API Viewer.

    Local storage types that are located on other cluster nodes can be added.

    A node selector was added to the Add Storage wizard for the ZFS, LVM, and LVM-Thin storage types.

    Automatically redirect HTTP requests to HTTPS for convenience.

    This avoids "Connection reset" browser errors that can be confusing, especially after setting up a Proxmox VE host the first time.

    Task logs can now be downloaded directly as text files for further inspection.
    It is now possible to choose the sort-order of the resource tree and to sort guests by name.
    Fix loading of changelogs in case additional package repositories are configured.
    Improve editing of backup jobs:
        Add a filter to the columns of the guest selector.
        Show selected, but non-existing, guests.
    Remove the "Storage View" mode from the resource tree panel.

    This mode only showed the storage of a cluster and did not provide additional information over the folder or server views.

    The Proxmox Backup Server specific columns for verification and encryption status can now be used for sorting in the backup content view of a storage.
    Polish the user experience of the backup schedule simulator by splitting the date and time into two columns and better check the validity of the input fields.
    Improve accessibility for screens with our minimal required display resolution of 720p
        add scrolling overflow handler for the toolbar of the backup job view
        rework the layout of the backup job info window for better space usage and reduce its default size
    Fix search in "Guests without backup" window.
    Node and Datacenter resource summary panels now show the guest tag column by default.
    Show role privileges when adding permissions.
    Allow the use of the `-` character in snapshot names, as the backend has supported this for some time.
    Update the noVNC guest viewer to upstream version 1.4.0.
    Fix overly-strict permission check that prevented users with only the VM.Console privilege from accessing the noVNC console.
    Align permission check for bulk actions with the ones enforced by the API.

    Switch the check from the Sys.PowerMgmt privilege to the correct VM.PowerMgmt one.

    Invalid entries in advanced fields now cause the advanced panel to unfold, providing direct feedback.
    HTML-encode API results before rendering as additional hardening against XSS.
    Fix preselection of tree elements based on the URL after login.
    Fix race condition when switching between the content panels of two storages before one of them has finished loading.
    Metric server: Expose setting the verify-certificate option for InfluxDB as advanced setting
    Replace non-clickable checkbox with icons for backup jobs, APT repositories, and replication jobs.
    Fix error when editing LDAP sync setting and only a single parameter is not set to a non-default value.
    Add missing online-help references for various panels and edit windows.
    Improved translations, among others:
        Arabic
        French
        German
        Italian
        Japanese
        Russian
        Slovenian
        Simplified Chinese

Virtual Machines (KVM/QEMU)

    New QEMU Version 7.2:
        QEMU 7.2 fixes issues with Windows Guests, installed from a German ISO, during installation of the VirtIO drivers.
        Fix crash of VMs with iSCSI disks on a busy target.
        Fix rare hang of VMs with IDE/SATA during disk-related operations like backup and resize.
        Many more changes, see the upstream changelog for details.
    Taking a snapshot of a VM with large disks following a PBS backup occasionally was very slow. This has been fixed (issue #4476).
    Running fsfreeze/fsthaw before starting a backup can now optionally be disabled in the QEMU guest agent options.

    Note: Disabling this option can potentially lead to backups with inconsistent filesystems and should therefore only be disabled if you know what you are doing.

    Cloning or moving a disk of an offline VM now also takes the configured bandwidth limits into consideration (issue #4249).
    Fix an issue with EFI disks on ARM 64 VMs.
    Add safeguards preventing the moving of disks of a VM using io_uring to storage types that have problems with io_uring in some kernel versions.
    General improvements to error reporting. For example, the error messages from query-migrate are added when a migration fails and a configured, but non-existing physical CD-ROM drive, results in a descriptive error message.
    Allow users to destroy a VM even if it's suspended.
    Fix a race-condition when migrating VMs on highly loaded or slower clusters, where the move of the guest's config file to the target node directory might not have been propagated to the target node.
    Rolling back a VM to a snapshot with state (memory) and still selecting to start the VM after the rollback does not cause an error anymore (rollbacks with state result in a running VM).
    Deleting snapshots of running VMs, with a configured TPM on Ceph storages with krbd enabled, is now possible.
    Fix command execution via pvesh and QEMU guest agent in VMs on other cluster nodes.
    Update Linux OS version description to include 6.x kernels.

Containers (LXC)

    Update to LXC 5.0.2 and lxcfs 5.0.3.
    Allow riscv32 and riscv64 container architectures through the binfmt_misc kernel capability.

    After installing the qemu-user-static and binfmt-support packages one can use a RISC-V based rootfs image to run as container directly on an x86_64/amd64 Proxmox VE host.

    Create /etc/hostname file on Alma Linux, CentOS, and Rocky Linux containers. With this, DHCP requests sent by the container now include its hostname.
    Add option to disconnect network interfaces of containers, similarly to network interfaces of VMs.
    Make container start more resilient after OOM or node crash (empty AppArmor profile files do not cause a crash).
    Improve cleanup upon failed restores (remove the container configuration if restore fails due to an invalid source archive, remove firewall configuration).
    Ignore bind or read-only mount points when running pct fstrim.
    During container shutdown, wait with a timeout in case lxc-stop fails. This prevents the shutdown task from running indefinitely and having to be aborted manually.
    Templates:
        Updated Debian Bullseye template from 11.3 to 11.6.
        Updated Proxmox Mail Gateway template from 7.0 to 7.2.

General improvements for virtual guests

    The "Bulk Stop" action was renamed to "Bulk Shutdown" to better describe its behavior.
    Allow overriding timeout and force-stop settings for bulk shutdowns.
    Allow bulk actions even if the user does not have the required privileges for all guests, as long as they have the privileges for each guest involved in the bulk action.

HA Manager

    Add CRM command to switch an online node manually into maintenance (without reboot).

    When a node goes into maintenance mode, all active HA services are moved to other nodes and automatically migrated back once maintenance mode is disabled again. (A CLI sketch of the maintenance command and the new CRS option follows after this section.)

    The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.

    One can now enable the ha-rebalance-on-start option in the datacenter.cfg or via the web UI to use Proxmox CRS to balance on service start up.

    A new intermediate state request_started has been added for the stop -> start transitions of services.
    Improve the scheduling algorithm for some cases:
        Make CPU load matter more if there is no memory load at all; this avoids boosting tiny relative differences over higher absolute loads.
        Use a non-linear averaging algorithm when comparing loads; the previous algorithm was blind in cases where the static node stats are the same and (at least) one node is overcommitted compared to the others.
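
    A CLI sketch of the two HA additions above (node name pve2 is an example; the commands and the crs option follow the current ha-manager and datacenter.cfg documentation):

        # put node pve2 into maintenance mode without a reboot, and release it again
        ha-manager crm-command node-maintenance enable pve2
        ha-manager crm-command node-maintenance disable pve2

        # /etc/pve/datacenter.cfg -- also rebalance guests on service start
        crs: ha=static,ha-rebalance-on-start=1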

Improved management for Proxmox VE clusters

    Ensure that the current working directory is not in /etc/pve when you set up the cluster using the pvecm CLI tool.

    Since pmxcfs, which provides the mount point for /etc/pve, is restarted when you set up the cluster, a confusing "Transport endpoint is not connected" error message would be reported otherwise.

    The proxmox-offline-mirror tool now supports fetching data through an HTTP proxy.
    Fetching the changelog of package updates has been improved:
        The correct changelog will be downloaded if repositories from multiple Proxmox projects are configured, for example if one has Proxmox VE and Proxmox Backup Server installed on the same host.
        Support getting the changelog for packages coming from a Debian Backports repository.
    You can now configure if you want to receive a notification mail for new available package updates.
    The wrapper for acme.sh DNS-validation plugins received fixes for 2 small issues:
        a renaming of parameters for the acmedns plugin was pulled from upstream.
        a missing method was added to fix an issue with the dns_cf.sh plugin.
    Improved pvereport: In order to provide a better status overview, add the following information:
        /etc/pve/datacenter.cfg.
        ceph health detail.
    OpenSSL errors are now reported in full to ease troubleshooting when managing the node's certificates.
    Add missing or newly added/split-out packages to the Proxmox VE apt version API, also used for the pveversion -v call:
        proxmox-mail-forward
        proxmox-kernel-helper
        libpve-rs-perl

Backup/Restore

    Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.

Storage

    It is now possible to override the specific subdirectories for content (ISOs, container templates, backups, guest disks) with custom values through the content-dirs option (see the storage.cfg sketch after this list).
    The CIFS storage type can now also directly mount a specific subdirectory of a share, thus better integrating into already existing environments.
    The availability check for the NFSv4 storage type was reworked in order to work with setups running without rpcbind.
    Fix ISO upload via HTTP in a few edge cases (newlines in filenames, additional headers, not sent by common browsers).
    Fix caching volume information for systems which both have a local ZFS pool storage and a ZFS over iSCSI storage configured during guest disk rescan.
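
    A sketch of the content-dirs override for a directory storage in /etc/pve/storage.cfg (paths are examples):

        dir: local
            path /var/lib/vz
            content iso,backup,vztmpl
            # map content types to custom subdirectories below the storage path
            content-dirs iso=custom/iso,backup=custom/dump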

Storage Replication

    Extend support for online migration of replicated VM guests.

    One can now also migrate VMs with snapshots, as long as those exist only on replicated volumes.

Disk Management

    Improve showing the SMART values for the correct NVMe devices.

Ceph

    Expose more detailed OSD information through the API and use that to add an OSD Detail window in the web interface.

    You can now check the backing device, logical volume info, front- and back- network addresses and more using the new OSD detail window.

    Show placement groups per OSD in the web interface.
    Improve schema description for various Ceph-related API endpoints.

    This also improves the api-viewer and pvesh tool for various Ceph-related API endpoints.

    Fix broken cmd-safety endpoint that made it impossible for non-root users to stop/destroy OSDs and monitors.
    Allow admins to easily set up multiple MDS per node to increase redundancy if more than one CephFS is configured.

Access Control

    ACL computation was refactored causing a significant performance improvement (up to a factor of 450) on setups with thousands of entries.
    It is now possible to override the remove-vanished settings for a realm when actively syncing it in the GUI.
    Allow quoted values in LDAP DN attributes when setting up an LDAP realm.

Firewall & Software Defined Networking

    IPSets can now be added even with host bits set: for example, 192.0.2.5/24 is now valid input, and the host bits are cleared upon parsing, resulting in 192.0.2.0/24 (see the sketch after this list).
    Firewall logs can be restricted to a timeframe with the since and until parameters of the API call (see the example after this list).
    The conditional loading of nf_conntrack_helpers was dropped for compatibility with kernel 6.1.
    Suppressing link-local IPv6 addresses on the internal guest-communication devices was fixed for a corner case.
    The MTU is now set to the value of the parent bridge on the automatically generated VLAN-bridge devices for non-VLAN-aware bridges.
    The EVPN plugin now also merges a defined prefix-list from /etc/frr/frr.conf.local.
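
    The host-bit clearing described for IPSets above matches what Python's ipaddress module does with strict=False; a tiny standalone illustration (not Proxmox code):

        import ipaddress

        # strict=False clears the host bits instead of raising a ValueError
        net = ipaddress.ip_network("192.0.2.5/24", strict=False)
        print(net)  # -> 192.0.2.0/24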
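
    For the firewall-log timeframe filter above, a hedged pvesh example; the node name is made up and the timestamps are assumed to be UNIX epochs (check the API viewer for the exact format):

        # show firewall log entries for a one-hour window on hypothetical node 'pve1'
        pvesh get /nodes/pve1/firewall/log --since 1700000000 --until 1700003600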

Installation ISO

    The version of BusyBox shipped with the ISO was updated to 1.36.0.
    The EFI System Partition (ESP) now defaults to 1 GiB in size if the root disk (hdsize) is bigger than 100 GB.
    UTC can now be selected as timezone during installation.

Notable bug fixes

    An issue with OVS network configuration, where the node would lose connectivity when upgrading Open vSwitch, was fixed (see https://bugs.debian.org/1008684).
    A race condition in the API servers that caused failed tasks when running many concurrent API requests was fixed.

Known Issues & Breaking Changes

    In QEMU 7.2, a failure to initialize audio is a hard error rather than just a warning.

    This can happen, for example, if you have an audio device configured with the SPICE driver but are not using the SPICE display. To avoid the issue, make sure the configuration is valid (see the sketch after this list).

    With pve-edk2-firmware >= 3.20221111-1 we know of two issues affecting specific setups:
        Virtual machines using OVMF/EFI with very little memory (< 1 GiB) and certain CPU types (e.g. host) might no longer boot.

        Possible workarounds are to assign more memory or to use kvm64 as the CPU type.
        The background for this problem is that OVMF < 3.20221111-1 used to guess the address (bit) width only from the available memory, while there is now more accurate detection that better matches what the configured CPU type provides. The more accurate address width can lead to a larger space requirement for page tables.

        The (non-default) PVSCSI disk controller might regress, with SCSI disks no longer being detected inside the guest.

        We're still investigating this; until then, you might evaluate whether your VM really requires the non-standard PVSCSI controller, use the SATA bus instead, or keep using the older pve-edk2-firmware package.
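
    For the QEMU 7.2 audio issue above, a hedged sketch of pairing a SPICE audio device with a SPICE-capable display via qm; the VMID is made up:

        # hypothetical VMID 100: SPICE audio driver plus a SPICE-capable display
        qm set 100 --audio0 device=ich9-intel-hda,driver=spice
        qm set 100 --vga qxl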


http://www.proxmox.com/downloads
Title: Proxmox VE 8.0
Post by: SiLæncer on 22 June 2023, 21:50
Here is a selection of the highlights of the Proxmox VE 8.0 final version:

    Debian 12, but using a newer Linux kernel 6.2
    QEMU 8.0.2, LXC 5.0.2, ZFS 2.1.12
    Ceph Server:
        Ceph Quincy 17.2 is the default and comes with continued support.
        There is now an enterprise repository for Ceph which can be accessed via any Proxmox VE subscription, providing the best stability for production systems.
    Additional text-based user interface (TUI) for the installer ISO.
    Integrate host network bridge and VNet access for configuring virtual guests into the ACL system of Proxmox VE.
    Add access realm sync jobs to conveniently synchronize users and groups from an LDAP/AD server automatically at regular intervals.
    New default CPU type for VMs: x86-64-v2-AES
    Resource mappings between PCI(e) or USB devices and nodes in a Proxmox VE cluster.
    Countless GUI and API improvements.

As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release Notes: https://pve.proxmox.com/wiki/Roadmap

https://www.proxmox.com/downloads
Title: Proxmox VE 8.1
Post by: SiLæncer on 24 November 2023, 20:50
Highlights

    Support for Secure Boot: This version is now compatible with Secure Boot. This security feature is designed to protect the boot process of a computer by ensuring that only software with a valid digital signature launches on a machine. Proxmox VE now includes a signed shim bootloader trusted by the UEFI implementations of most hardware. This allows installing Proxmox VE in environments with Secure Boot active.

    Software-defined Network (SDN): With this version, the core Software-defined Network (SDN) packages are installed by default. The SDN technology in Proxmox VE enables the creation of virtual zones and networks (VNets), letting users effectively manage and control complex networking configurations and multitenancy setups directly from the web interface at the datacenter level. Use cases for SDN range from an isolated private network on each individual node to complex overlay networks across multiple Proxmox VE clusters in different locations. The result is a more responsive and adaptable network infrastructure that can scale according to business needs.
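
    As a rough sketch of creating such a zone and VNet from the CLI (the zone/VNet names are made up, and the endpoints should be double-checked in the API viewer):

        # create a simple zone plus a VNet inside it, then apply the pending SDN configuration
        pvesh create /cluster/sdn/zones --type simple --zone demozone
        pvesh create /cluster/sdn/vnets --vnet demonet --zone demozone
        pvesh set /cluster/sdn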

    New Flexible Notification System: This release introduces a new framework that uses a matcher-based approach to route notifications. It lets users designate different target types as recipients of notifications. Alongside the current local Postfix MTA, supported targets include Gotify servers or SMTP servers that require SMTP authentication. Notification matchers determine which targets will get notifications for particular events based on predetermined rules. The new notification system enables greater flexibility, allowing more granular definitions of when, where, and how notifications are sent.
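
    A loose sketch of what a matcher-based setup in /etc/pve/notifications.cfg might look like; the section and field names are written from memory and every name and address is made up, so verify against the notification documentation:

        smtp: example-smtp
            server mail.example.com
            from-address pve@example.com
            mailto ops@example.com

        matcher: example-errors
            match-severity error
            target example-smtp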

    Support for Ceph Reef and Ceph Quincy: Proxmox Virtual Environment 8.1 adds support for Ceph Reef 18.2.0 and continues to support Ceph Quincy 17.2.7. The preferred Ceph version can be selected during the installation process. Ceph Reef brings better defaults, improved performance, and increased read speed.



Release Notes: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.1

https://www.proxmox.com/downloads
Title: Proxmox VE 8.2
Post by: SiLæncer on 24 April 2024, 21:45
Highlights

    Import Wizard for VMware ESXi VMs: Proxmox VE provides an integrated VM importer, presented as a storage plugin, for native integration into the API and web-based user interface. It offers users the ability to import guests directly from other hypervisors. Currently, it allows importing VMware-based VMs (ESXi and vCenter). You can use this to import a VM as a whole, with most of the original configuration settings mapped to Proxmox VE's configuration model.
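
    A hedged sketch: since the importer is presented as a storage plugin, attaching an ESXi host could look roughly like this with pvesm (server address, credentials, and exact option names are assumptions):

        # attach a hypothetical ESXi host as an import source
        pvesm add esxi example-esxi --server 192.0.2.10 --username root --password 'secret'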

    Automated and Unattended Installation: Proxmox offers a new ‘proxmox-auto-install-assistant’ tool that fully automates the setup process on bare metal. Automated installation allows for the rapid deployment of Proxmox VE hosts without the need for manual access to the systems, saving time and reducing the risk of errors. To use this method, an answer file must be prepared with the necessary configuration settings for the installation process. This file can be provided directly in the ISO, on an additional disk such as a USB flash drive, or over the network. Automated installation is useful in various scenarios, such as deploying large-scale infrastructure, automating the setup process, and ensuring consistent configurations across multiple systems.
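
    A loose sketch of an answer file and of baking it into the ISO; the TOML keys follow the documented format as far as remembered, and every concrete value is made up:

        # answer.toml (all values hypothetical)
        [global]
        keyboard = "de"
        country = "at"
        fqdn = "pve-demo.example.com"
        mailto = "admin@example.com"
        timezone = "Europe/Vienna"
        root_password = "change-me"

        [disk-setup]
        filesystem = "zfs"
        zfs.raid = "raid1"
        disk_list = ["sda", "sdb"]

        # embed the answer file into an installer ISO (subcommand as documented for the assistant)
        proxmox-auto-install-assistant prepare-iso proxmox-ve.iso --fetch-from iso --answer-file answer.toml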

    Backup Fleecing: When creating a backup of a running VM, a slow backup target can negatively impact guest IO performance during the backup process. Fleecing can reduce this impact by caching data blocks in a fleecing image rather than sending them directly to the backup target, which can help guest IO performance and even prevent hangs, at the cost of requiring more storage space. Backup fleecing is especially beneficial when backing up IO-heavy guests to a remote Proxmox Backup Server or other backup storage with a slow network connection.
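
    A hedged vzdump example with fleecing enabled; the VMID and storage names are made up, and the exact property string should be checked against the vzdump man page:

        # back up VM 100 to a hypothetical PBS storage, fleecing to fast local storage
        vzdump 100 --storage example-pbs --fleecing enabled=1,storage=local-lvm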

    Firewall modernization with nftables (technology preview): Proxmox VE comes with a new firewall implementation that uses nftables instead of iptables. This opt-in feature, currently in technology preview, is written in the Rust programming language. Although the new implementation is close to feature parity with the existing one, the nftables firewall must be enabled manually and remains a preview in order to first gather feedback from the community.
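
    Opting in per host, sketched with pvesh; the node name is made up and the option name is written from memory, so verify it in the host firewall options first:

        # enable the nftables-based firewall on hypothetical node 'pve1'
        pvesh set /nodes/pve1/firewall/options --nftables 1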


Release Notes: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2

https://www.proxmox.com/downloads