CLI
The CLI's logging configuration is located in ~/.cloudify/config.yaml, under the logging directive.
The structure of the logging directive is coupled to the logic implemented by the CLI's logging facility (located at https://github.com/cloudify-cosmo/cloudify-cli/blob/4.3/cloudify_cli/logger.py).
- If the config.yaml file is missing, it is created using hard-coded defaults.
- Otherwise, it is read and parsed by the CLI's logging facility, which configures logging according to the logging directive.
The default logging configuration sends all logs into ~/.cloudify/logs/cli.log, and enables only a couple of loggers at INFO level.
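As a rough illustration, the logging directive might look like the sketch below. The exact keys and logger names here are assumptions, so verify them against the hard-coded defaults in logger.py (linked above) or against a freshly generated ~/.cloudify/config.yaml.

```
# ~/.cloudify/config.yaml -- illustrative sketch only; the key layout and
# logger names below are assumptions, not a verified reference.
logging:
  filename: /home/centos/.cloudify/logs/cli.log   # target file for all CLI logs
  loggers:
    cloudify.cli.main: info          # assumed logger name, enabled at INFO by default
    cloudify.rest_client.http: info  # assumed logger name, enabled at INFO by default
```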
If -vvv is provided on the command line, the CLI automatically sets all of its configured loggers to DEBUG level, regardless of the logging configuration.
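For example, any CLI command can be run with the flag to get DEBUG output from all configured loggers:

```
cfy status -vvv
```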
Agents
For this section, we will use the token AGENT_DIR to signify the location of the agent's installation.
- On Linux, AGENT_DIR is a subdirectory of the agent user's home directory (specified in the agent_config property). The subdirectory is named after the node instance ID of the relevant cloudify.nodes.Compute node. For example, if the agent's user is centos, its home directory is /home/centos, and the node instance ID is server_a1b2c3, then AGENT_DIR would be /home/centos/server_a1b2c3.
- On Windows, the location is C:\Program Files\Cloudify Agents\<node-instance-id>.
AGENT_DIR/work
In AGENT_DIR/work, there are files of the form <node_instance_id><pid>.log.
node_instance_id will always be the same; it is the node instance ID of that same Compute node. The number that follows is the PID of the specific Celery process. Each Celery process logs into its own file, for serialization purposes. One of these processes is the Celery master process; the others are worker processes. You can identify them by looking at the OS's process list for python or celery (see the sketch after the list below).
- The master process doesn't perform tasks; its role is to connect to RabbitMQ and wait for tasks. Once a task is received, it is dispatched to an available worker process. Occasionally, the master process will kill worker processes and restart them, resulting in new log files being created (as new processes will get their own PID).
- A worker process actually performs orchestration tasks. You will find logs related to task execution in the worker processes' logs.
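A quick way to tell the processes apart on the agent host (a sketch; the exact process names in the listing can vary by version and OS):

```
# List the agent's Celery/Python processes; the parent process is the Celery
# master, and its children are the worker processes that execute tasks.
ps -ef | grep -E '[c]elery|[p]ython'
```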
AGENT_DIR/work/logs/<deployment_id>
I haven't yet figured out what this log file is for. I do know that it's maintained by the Celery master process, and it logs (among other things) REST API calls to the manager.
AGENT_DIR/work/logs/tasks (*.err / *.out)
This is new with 4.3. You will find sets of files there, each set comprising a UUID base name with .out and .err extensions.
These files contain the standard output and standard error streams of scripts invoked by the script plugin on that agent. The files are written to in real time, so tailing them can be very useful for troubleshooting.
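For example, to follow a script's output while it runs (the <uuid> below is a placeholder for the base name of the file set you are interested in):

```
# Follow the stdout and stderr of one script-plugin invocation in real time.
tail -f AGENT_DIR/work/logs/tasks/<uuid>.out AGENT_DIR/work/logs/tasks/<uuid>.err
```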
Manager
The vast majority of the Cloudify Manager logs are located in /var/log/cloudify. There is only one exception (Riemann).
REST Service
This is likely to be the starting point when troubleshooting any "internal server error" or "HTTP 500" error messages. The REST service's logs are located at /var/log/cloudify/rest.
- gunicorn.log contains gunicorn's own log. Errors there usually mean that the REST service has a system-level problem that isn't really related to the Cloudify REST functionality, so you may want to pay attention to this file.
- gunicorn-access.log contains a summary of each REST API call processed by the REST service.
- cloudify-rest-service.log contains the log of our actual REST API implementation. This is usually the most useful file when it comes to troubleshooting (see the example below).
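For example, when chasing an "HTTP 500", a reasonable first step is to look at the most recent entries of that log around the time of the failed request:

```
# Tracebacks for "internal server error" responses usually show up here.
sudo tail -n 200 /var/log/cloudify/rest/cloudify-rest-service.log
```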
Management Workers
The most useful log files are those produced by the management workers, under /var/log/cloudify/mgmtworker.
Logging Level
By default, the management workers log at the INFO level.
To change that:
- Edit /etc/sysconfig/cloudify-mgmtworker.
- Change the value of the CELERY_LOG_LEVEL variable from INFO to something else (such as DEBUG).
- Restart the management workers (note: currently-running workflows will stop and will not be resumable):
sudo systemctl restart cloudify-mgmtworker
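Put together, the change might look like the sketch below. The sed pattern assumes the variable appears as CELERY_LOG_LEVEL=... on its own line; check the file's actual formatting before editing it in place.

```
# Switch the management workers to DEBUG logging and restart them.
# Reminder: the restart stops currently-running workflows, and they are not resumable.
sudo sed -i 's/^CELERY_LOG_LEVEL=.*/CELERY_LOG_LEVEL=DEBUG/' /etc/sysconfig/cloudify-mgmtworker
sudo systemctl restart cloudify-mgmtworker
```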
cloudify.management_worker.log
This file is shared by all management worker processes, and contains basic information about tasks, from Celery's perspective:
- Log of connection to RabbitMQ.
- Task acceptance.
- Task completion, including error details.
logs/__system__.log
logs/<deployment_id>.log
These per-deployment log files contain, among other things, ctx logger printouts.
Nginx
The Nginx logs are located in /var/log/cloudify/nginx.
Nginx logs access and errors for the following components:
Component | Access Log | Error Log
---|---|---
File Server | cloudify-files.log | error.log
REST API | cloudify.access.log | cloudify.error.log
- An "Access Log" shows basic HTTP request and response information for all HTTP requests.
- An "Error Log" shows HTTP request and response information for all HTTP requests that ended with a non-OK response code (4xx and 5xx HTTP response codes).
There is also an access.log file; however, under normal circumstances it should be zero-length. If this file is not zero-length, please let us know, because it implies there's a gap in Nginx's logging configuration.
Manager Installer
The Manager's installation script (cfy_manager), new with 4.3, logs into /var/log/cloudify/manager/cfy_manager.log.
This file is only updated during installation or re-configuration of the manager.
UI
The UI logs are located in /var/log/cloudify/stage.
- The apps directory contains the logs of the actual UI application.
- access.log contains information about incoming HTTP requests, as well as the HTTP response code for each request.
- errors.log contains a subset of access.log: only HTTP requests that ended up with errors will be shown here.
Logstash
Logstash's logs are located in /var/log/cloudify/logstash.
Logstash is a Java application, hence:
- logstash.stdout contains the standard output stream of the JVM in which Logstash runs. You normally won't find much help in these logs.
- logstash.err contains the standard error stream of the JVM. You are unlikely to find anything useful here, unless the JVM ended abnormally.
- logstash.log contains the actual log of the Logstash application.
Should you ever need to adjust Logstash's logging configuration, refer to: https://www.elastic.co/guide/en/logstash/current/logging.html
InfluxDB
InfluxDB's logs are located in /var/log/cloudify/influxdb.
PostgreSQL
PostgreSQL logs are actually written to /var/lib/pgsql/9.5/data/pg_log, but we hold a symlink to this directory at /var/log/cloudify/postgresql/pg_log.
RabbitMQ
RabbitMQ's logs are located at /var/log/cloudify/rabbitmq.
- rabbit@<hostname>.log is the main RabbitMQ log file.
- rabbit@<hostname>-sasl.log contains Erlang SASL logging (supervisor, crash, and progress reports).
Composer
Logs are located in /var/log/cloudify/composer.
Riemann
Riemann has three log files:
- /var/log/riemann/riemann.log contains nothing.
- /var/log/cloudify/riemann/riemann.log contains the actual Riemann logs.
- /tmp/riemann.log contains all of Riemann's logging from before we actually configure Riemann logging. For most purposes, this file is entirely useless.