RC Scripts
In the post "Running a Python + OpenCV script on reboot" (see resources), he explains how to automatically run a Python script when a Raspberry Pi starts. He uses Python virtual environments, so the first two commands load the virtual env; after that, the script moves to the app folder and runs the Python script.
Note: before moving forward, I should add some context. I need to run my Python script in a terminal. My device always auto-starts with a 3.5-inch touch screen and a camera, so I need some GUI loaded.
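The steps described above (load the virtual environment, move to the app folder, run the script) can be sketched as a small launcher script. This is only an illustration: the virtualenv path, app folder, and script name below are placeholders for your own setup, not values from the post.

```shell
#!/bin/sh
# Hypothetical launcher along the lines described in the post.
# VENV and APP_DIR are placeholders; adjust them to your own setup.
VENV="$HOME/.virtualenvs/cv"   # placeholder: your virtual environment
APP_DIR="$HOME/app"            # placeholder: your application folder

start_app() {
    . "$VENV/bin/activate"     # load the virtual environment
    cd "$APP_DIR" || return 1  # move to the app folder
    python main.py             # run the Python script
}

# Only start when the virtualenv actually exists, so a broken path
# shows up as a visible message in the terminal instead of a crash.
if [ -d "$VENV" ]; then
    start_app
else
    echo "virtualenv not found at $VENV" >&2
fi
```

Because I need the output visible on the touch screen, a launcher like this would be started from a terminal emulator in a desktop autostart entry, for example `lxterminal -e /home/pi/launcher.sh` (command and path illustrative).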
Use this method to install the guest environment if you can connect to the target instance using SSH. If you can't connect to the instance to install the guest environment, you can instead install the guest environment by cloning its boot disk and using a startup script.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view the logs, run the following command:
In Unix systems such as System III and System V, the design of init has diverged from the functionality provided by the init in Research Unix and its BSD derivatives. Up until recently, most Linux distributions employed a traditional init that was somewhat compatible with System V, while some distributions such as Slackware use BSD-style startup scripts, and others such as Gentoo have their own customized versions.
Research Unix init runs the initialization shell script located at /etc/rc,[1] then launches getty on terminals under the control of /etc/ttys.[2] There are no runlevels; the /etc/rc file determines what programs are run by init. The advantage of this system is that it is simple and easy to edit manually. However, new software added to the system may require changes to existing files that risk producing an unbootable system.
A fully modular system was introduced with NetBSD 1.5 and ported to FreeBSD 5.0 and successors. This system executes scripts in the /etc/rc.d directory. Unlike System V's script ordering, which is derived from the filename of each script, this system uses explicit dependency tags placed within each script.[7] The order in which scripts are executed is determined by the rcorder utility based on the requirements stated in these tags.
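A script in that scheme declares its ordering constraints in comment tags that rcorder reads. A minimal sketch (the service name myapp and its REQUIRE list are illustrative, and a real BSD script would source /etc/rc.subr and use run_rc_command rather than a bare case statement):

```shell
#!/bin/sh
#
# PROVIDE: myapp
# REQUIRE: NETWORKING syslogd
# KEYWORD: shutdown
#
# rcorder reads the PROVIDE/REQUIRE tags above to compute the order in
# which the scripts in /etc/rc.d run; the file name itself carries no
# ordering information. The body is a plain start/stop dispatcher,
# kept self-contained here for illustration.

action="${1:-start}"   # default to "start" when run with no argument
case "$action" in
start) echo "starting myapp" ;;
stop)  echo "stopping myapp" ;;
*)     echo "usage: $0 {start|stop}" >&2 ;;
esac
```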
At any moment, a running System V is in one of a predetermined number of states, called runlevels. At least one runlevel is the normal operating state of the system; typically, other runlevels represent single-user mode (used for repairing a faulty system), system shutdown, and various other states. Switching from one runlevel to another causes a per-runlevel set of scripts to be run, which typically mount filesystems, start or stop daemons, start or stop the X Window System, shut down the machine, etc.
The thing is, I just don't understand how this works. Obviously my /etc/init.d/wdm script is called, because when I put an early exit in there, wdm is not started. But when I alternatively rename the /etc/rc3.d directory (my default runlevel used to be 3), then wdm is still started.
The compatibility with van Smoorenburg rc scripts is achieved with a conversion program, named systemd-sysv-generator. This program is listed in the /usr/lib/systemd/system-generators/ directory and is thus run automatically by systemd early in the bootstrap process at every boot, and again every time that systemd is instructed to re-load its configuration later on.
This program is a generator, a type of ancillary utility whose job is to create service unit files on the fly, in a tmpfs where three more of those nine directories (which are intended to be used only by generators) are located. systemd-sysv-generator generates the service units that run the van Smoorenburg rc scripts from /etc/init.d, if it doesn't find a native systemd service unit by that name already existing in the other six locations.
Received wisdom is that the van Smoorenburg rc scripts must have an LSB header, and are run in parallel without honouring the priorities imposed by the /etc/rc.d/ system. This is incorrect on all points.
In fact, they don't need to have an LSB header, and if they do not, systemd-sysv-generator can recognize the more limited old RedHat comment headers (description:, pidfile:, and so forth). Moreover, in the absence of an LSB header it will fall back to the contents of the /etc/rc.d symbolic link farms, reading the priorities encoded into the link names and constructing a before/after ordering from them, serializing the services. Not only are LSB headers not a requirement, and not only do they themselves encode before/after orderings that serialize things to an extent, but the fallback behaviour in their complete absence is actually significantly non-parallelized operation.
The reason that /etc/rc3.d didn't appear to matter is that you probably had that script enabled via another /etc/rc.d/ directory. systemd-sysv-generator translates being listed in any of /etc/rc2.d/, /etc/rc3.d/, and /etc/rc4.d/ into a native Wanted-By relationship to systemd's multi-user.target. Run levels are "obsolete" in the systemd world, and you can forget about them.
Systemd is backward compatible with SysV init scripts. According to LSB 3.1, the init script must have informational Comment Conventions, defining when the script has to start/stop and what is required for the script to start/stop. This is an example:
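A typical LSB comment header block, placed near the top of the init script, looks like this (the service name my-daemon and its descriptions are illustrative; $network and $syslog are standard LSB facility names):

```shell
### BEGIN INIT INFO
# Provides:          my-daemon
# Required-Start:    $network $syslog
# Required-Stop:     $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start my-daemon at boot time
# Description:       Longer description of what my-daemon does.
### END INIT INFO
```

The Required-Start/Required-Stop lines tell the init system what must be up before this script runs, and Default-Start/Default-Stop name the runlevels in which it should be started or stopped.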
But there is one point where systemd and SysV differ in terms of init scripts. SysV executes the scripts in sequential order based on the number in the filename. Systemd doesn't: if dependencies are met, systemd runs the scripts immediately, without honoring the numbering of the script names. Some of them will most probably fail because of the ordering. There are lots of other incompatibilities that should be considered.
We're enhancing the task of configuring VMs to use cloud-init instead of the Linux Agent, so that existing cloud-init customers can use their current cloud-init scripts and new customers can take advantage of the rich cloud-init configuration functionality. If you have existing investments in cloud-init scripts for configuring Linux systems, no additional settings are required to enable cloud-init to process them.
Once the VM has been provisioned, cloud-init will run through all the modules and scripts defined in --custom-data in order to configure the VM. If you need to troubleshoot any errors or omissions in the configuration, search for the module name (disk_setup or runcmd, for example) in the cloud-init log, located at /var/log/cloud-init.log.
Not every module failure results in a fatal cloud-init configuration failure overall. For example, if a script run by the runcmd module fails, cloud-init will still report that provisioning succeeded, because the runcmd module itself executed.
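Searching the log for a module name can be simulated like this. The log excerpt below is fabricated for illustration (real cloud-init entries differ in detail); on an actual VM you would grep /var/log/cloud-init.log itself.

```shell
#!/bin/sh
# Fabricated excerpt standing in for /var/log/cloud-init.log.
log=$(cat <<'EOF'
... cc_disk_setup.py[DEBUG]: Creating new partition table/disk
... cc_runcmd.py[WARNING]: Failed to shellify ...
... main.py[DEBUG]: Ran 20 modules with 0 failures
EOF
)

# Pull out the entries for one module, e.g. runcmd:
hits=$(printf '%s\n' "$log" | grep runcmd)
echo "$hits"
```

On a real system the equivalent would simply be `grep runcmd /var/log/cloud-init.log`.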
The objective of the run level script feature is to allow customers to start and stop selected applications by changing the run level. The directories are provided for customers to place their own start and stop scripts.
The system automatically runs the kill scripts when entering a given run level, then proceeds to run all of the start scripts to start the applications necessary at that level. In this manner, some applications can be stopped while others are started when entering a run level.
When shutting down the system or rebooting using the /usr/sbin/shutdown command, the Kill scripts for every run level will be run. This ensures all custom applications are finished before fully shutting down AIX.
Be sure to test this executable by running it from the command line. The first time you run this shell script, you should see a new file, /root/mystartup.log, with a time and date along with the text "Startup worked". We create this log file and add lines to it every time the script is run as a simple test to ensure that our script is working.
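The script being described could look like this sketch. It logs to /tmp here rather than /root, so it can be tried without root privileges; the article itself uses /root/mystartup.log.

```shell
#!/bin/sh
# Append a timestamped line to a log file each time the script runs,
# as simple proof that it executed at boot. The path is /tmp here for
# easy testing; the article's version writes to /root/mystartup.log.
LOG=/tmp/mystartup.log
echo "$(date) Startup worked" >> "$LOG"
```

Run it twice from the command line and the log should gain two lines, each with its own timestamp.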
If the program needs to both start and stop (that is, there are special things you need to do when the system is shutting down, like clean up temporary files, etc.) then it should have proper S and K scripts in the /etc/rc2.d hierarchy.
Typically, you want something to start at run level 2, when the system goes multi-user. If you examine inittab, you'll see that it calls /etc/rc2 with the "wait" keyword when it enters run level 2 (other systems, such as Linux, do the same thing, although script names may be different). The /etc/rc2 script is a "superscript": it calls other scripts. You could just add your command to /etc/rc2 itself, but that's not the way other administrators would expect you to do it.
What you are expected to do is put a script in /etc/rc2.d. It needs to be named so that "prc_sync" (SCO) or /etc/rc.d/rc (Linux) will recognize it. That means it will begin with an I, K, S or P. So that you can control when it runs in relation to the other scripts in /etc/rc2.d, you name it so that, beginning with its second letter, it sorts alphabetically to the position you want. That's alphabetic, NOT numeric, so S100mine will execute before S80lp. The first letter is ignored (but it has to be S, I or P for it to run as the system starts, and that has to be uppercase S, I or P; anything else will be ignored by prc_sync).
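The alphabetic-not-numeric point is easy to verify with sort, which is effectively the ordering the control script sees (the file names here are illustrative):

```shell
#!/bin/sh
# The rc control script runs scripts in plain alphabetical order, so
# S100mine comes before S80lp, because the character "1" sorts before
# the character "8" -- the numbers are never compared numerically.
first=$(printf '%s\n' S80lp S100mine S20sysinit | LC_ALL=C sort | head -n 1)
echo "$first"   # prints S100mine
```

If you actually wanted your script to run after S80lp, you would need a name like S81mine, not S100mine.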
Linux uses only S and K at this time. Scripts that begin with "K" are "kill" scripts, normally used to stop your process (if necessary) as the system goes down (technically, as it LEAVES the run level). Note that your process will be killed anyway; you only need a K script if you need to handle the death specially. Many existing K scripts are links to S or P scripts; this works because the control script that calls them uses "start" as an argument for the startup scripts and "stop" for the kill scripts. The scripts simply test their arguments and act accordingly.
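That link trick can be demonstrated end to end: one script, two link names, and the caller chooses the behaviour by passing "start" or "stop". This sketch builds everything in a temporary directory so it is safe to run anywhere (the service name myapp and the priorities 80/20 are illustrative):

```shell
#!/bin/sh
# One script, linked under both an S name and a K name, as many real
# rc directories do. Built in a temp dir so it is harmless to run.
dir=$(mktemp -d)

cat > "$dir/myapp" <<'EOF'
#!/bin/sh
case "$1" in
start) echo "myapp: starting" ;;
stop)  echo "myapp: stopping" ;;
esac
EOF
chmod +x "$dir/myapp"

# Both rc links point at the same file.
ln -s "$dir/myapp" "$dir/S80myapp"
ln -s "$dir/myapp" "$dir/K20myapp"

# The control script calls S links with "start" and K links with "stop".
out_start=$("$dir/S80myapp" start)
out_stop=$("$dir/K20myapp" stop)
echo "$out_start"   # myapp: starting
echo "$out_stop"    # myapp: stopping

rm -rf "$dir"
```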