Introduction¶
- Phoenix (the bird)
- Phoenix is a long-lived bird that is cyclically regenerated or reborn. (Wikipedia). [..]
Pyramid Phoenix is a web application built with the Python web framework Pyramid. Phoenix has a user interface that makes it easier to interact with Web Processing Services (WPS). The user interface lets you register Web Processing Services. For these registered WPS services you can see which processes they have available. You are provided with a form page to enter the parameters to execute a process (job). You can monitor the jobs and see the results.
In the climate science community many analyses use climate data in the NetCDF format. Phoenix uses the Malleefowl WPS, which provides processes to access NetCDF files from the ESGF data archive. Malleefowl also provides a workflow process to chain ESGF data retrieval with another WPS process that needs NetCDF data as input. Phoenix has a Wizard to collect the parameters to run such a workflow with a process of a registered WPS.
Phoenix should help developers of WPS processes to use their processes more conveniently, especially for feeding them with different data sources (like the ESGF data archive). Phoenix is also used to demonstrate available WPS processes.
Phoenix has a rather generic and technical user interface. To use Phoenix successfully you need some knowledge about WPS and the existing data archives. So Phoenix might not be a good choice for scientists who just want to run a specific analysis job; there are other climate portals available which address these users. But Phoenix should at least be developer friendly.
Phoenix is easy to install using the Anaconda Python distribution and Buildout. So Phoenix is not only available on production sites close to the data archives; you can also install it on your development machine to make testing of your own WPS processes easier and to present them to other people.
Installation¶
This installation works on 64-bit Linux (Ubuntu 14.04, CentOS 6, …). It might still work on macOS, but packages there are updated only from time to time. Most of the dependencies come from the Anaconda Python distribution. Additional conda packages come from the Birdhouse Binstar channel. The installation is done with Buildout.
Phoenix uses WPS processes provided by Malleefowl. As a prerequisite you should install a local Malleefowl WPS (this will become part of the Phoenix installer). Alternatively you can configure the WPS URL of a running Malleefowl instance in the Phoenix custom.cfg.
To install Malleefowl follow the instructions given in the Malleefowl documentation. In short:
$ git clone https://github.com/bird-house/malleefowl.git
$ cd malleefowl
$ make clean install
Now start with downloading Phoenix with sources from github:
$ git clone https://github.com/bird-house/pyramid-phoenix.git
$ cd pyramid-phoenix
For install options run make help and read the documentation of the Makefile.
Before installation you need to create a password for the local phoenix user, which is used to log in to the Phoenix web application:
$ make passwd
Generate Phoenix password ...
Enter a password with at least 8 characters.
Enter password:
Verify password:
Run 'make install restart' to activate this password.
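The value written to custom.cfg is a salted hash in the form algorithm:salt:hexdigest, the same format checked by security.passwd_check (which borrows its logic from IPython.lib.security). As a rough sketch of how such a hash can be computed and verified; the helper names here are illustrative, not Phoenix's actual code:

```python
import hashlib


def passwd_hash(passphrase, salt, algorithm="sha256"):
    """Return a salted hash in the 'algorithm:salt:hexdigest' form."""
    h = hashlib.new(algorithm)
    h.update((passphrase + salt).encode("utf-8"))
    return "%s:%s:%s" % (algorithm, salt, h.hexdigest())


def passwd_check(hashed, passphrase):
    """Check a passphrase against a stored 'algorithm:salt:digest' value."""
    algorithm, salt, _ = hashed.split(":", 2)
    return passwd_hash(passphrase, salt, algorithm) == hashed
```

This is only meant to show the shape of the stored value; use make passwd to generate the real hash.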
Optionally take a look at custom.cfg and make additional changes. When you're finished, run make clean install to install Phoenix:
$ make clean install
You always have to rerun make update after making changes in custom.cfg.
After successful installation you need to start the services. All installed files (configuration etc.) are below the conda environment birdhouse, which is by default in your home directory at ~/.conda/envs/birdhouse. Now start the services:
$ make start # starts supervisor services
$ make status # shows status of supervisor services
The Phoenix web application is then available at http://localhost:8081.
Check the log file for errors:
$ tail -f ~/birdhouse/var/log/supervisor/phoenix.log
$ tail -f ~/birdhouse/var/log/supervisor/celery.log
Run Docker¶
Set the HOSTNAME environment variable (not localhost) and run docker-compose:
HOSTNAME=phoenix HTTP_PORT=8081 HTTPS_PORT=8443 SUPERVISOR_PORT=9001 docker-compose up
Configuration¶
You can configure Phoenix by editing custom.cfg in the Phoenix source folder:
$ cd pyramid-phoenix
$ vim custom.cfg
$ cat custom.cfg
[settings]
hostname = localhost
http-port = 8081
https-port = 8443
log-level = INFO
# run 'make passwd' to generate the password hash
phoenix-password = sha256:#######################
esgf-search-url = http://example.org/esg-search
wps-url = http://localhost:8091/wps
# register at github: https://github.com/settings/applications/new
github-consumer-key = ########################
github-consumer-secret = ############################
By default Phoenix runs on localhost. The HTTP port 8081 is redirected to the HTTPS port 8443.
If you want to use a different hostname/port then edit the default values in custom.cfg:
[settings]
hostname = localhost
http-port = 8081
https-port = 8443
To be able to log in with the phoenix admin user you need to create a password. For this run:
$ make passwd
To activate the GitHub login for external users you need to configure a GitHub application key for your Phoenix web application:
[settings]
# register at github:
github-consumer-key = ########################
github-consumer-secret = ############################
See the GitHub Settings on how to generate the application key for Phoenix.
If you want to use a different Malleefowl WPS service then change the wps-url value:
[settings]
wps-url = http://localhost:8091/wps
If you want to use a different ESGF index service then change the esgf-search-url value:
[settings]
esgf-search-url = http://example.org/esg-search
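The esgf-search-url points at the esg-search endpoint of an ESGF index node. For illustration, a query URL against such an endpoint can be assembled like this (a minimal sketch; the host and the constraint values are examples, and Phoenix builds these queries internally):

```python
from urllib.parse import urlencode


def esgf_search_url(base_url, limit=10, **constraints):
    """Build a query URL for an ESGF esg-search REST endpoint."""
    params = dict(constraints)
    params.update({"limit": limit, "format": "application/solr+json"})
    # sort the parameters so the resulting URL is deterministic
    return "%s/search?%s" % (base_url.rstrip("/"),
                             urlencode(sorted(params.items())))


url = esgf_search_url("http://example.org/esg-search",
                      project="CORDEX", variable="tas", time_frequency="day")
```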
After any change to your custom.cfg you need to run make update again and restart the supervisor service:
$ make update # or install
$ make restart
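custom.cfg is an INI-style Buildout configuration, so the [settings] section shown above can be read with standard tooling. A minimal sketch with example values (Buildout's own parser adds features such as value substitution that plain configparser lacks):

```python
import configparser

# A minimal custom.cfg fragment as shown above (example values).
CUSTOM_CFG = """
[settings]
hostname = localhost
http-port = 8081
https-port = 8443
log-level = INFO
"""

config = configparser.ConfigParser()
config.read_string(CUSTOM_CFG)
settings = config["settings"]

# e.g. derive the HTTPS base URL Phoenix will be served on
base_url = "https://%s:%s" % (settings["hostname"], settings["https-port"])
```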
User Guide¶
The user guide explains how to use the Phoenix web application to interact with Web Processing Services.
Login¶
Press the Sign in button in the upper right corner.

The login page offers you several options to login to Phoenix.

You can log in using your ESGF OpenID or your GitHub account. If you log in for the first time your account needs to be activated by an administrator.
If you are Phoenix admin you can also enter the admin password here.
ESGF OpenID
You can use an ESGF OpenID. The ESGF OpenID is used later to access files from ESGF. Make sure that you have a valid ESGF OpenID at one of the ESGF providers (for example DKRZ) and that you are able to download a data file (you need to register for CMIP5 and CORDEX).
Enter the account name of your ESGF OpenID and choose the according ESGF OpenID provider (by default this is DKRZ).

Processes¶
When you have registered WPS services you can run a process. Go to the Processes tab.

Choose one of your registered WPS services. You will get a list of available processes (a WPS GetCapabilities request).

Choose one of these processes by using the Execute button. In case of Emu you may try the Hello World process. You will then be prompted to enter your username:

Press the Submit button. When the process is submitted you will be shown your job list in Monitor.
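Behind the form, Phoenix submits a WPS Execute request for the chosen process. As a hedged illustration, an equivalent Execute request in WPS 1.0.0 KVP (GET) encoding could be built like this; the process identifier "hello" and the input name "name" are assumptions about Emu, and Phoenix itself talks to the service via OWSLib rather than building URLs by hand:

```python
from urllib.parse import quote


def wps_execute_url(wps_url, identifier, inputs):
    """Build a WPS 1.0.0 Execute request in KVP (GET) encoding."""
    data_inputs = ";".join("%s=%s" % (key, quote(str(value)))
                           for key, value in inputs.items())
    return ("%s?service=WPS&version=1.0.0&request=Execute"
            "&identifier=%s&DataInputs=%s") % (wps_url, identifier, data_inputs)


# hypothetical Hello World request against a local Emu instance
url = wps_execute_url("http://localhost:8094/wps", "hello", {"name": "Stranger"})
```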
Monitor¶
In Monitor all your running or finished jobs are listed.
The list shows the status and progress of your jobs.

When a job has finished successfully you can see the results by clicking the Details button.

If the result has a document (XML, text, NetCDF, …) you can view or download this document with the Download button.
Wizard¶
The wizard is used to chain WPS processes and to collect the input parameters for the processes. Currently the wizard chains a user WPS process with a WPS process to retrieve ESGF data. The chained processes are run with a workflow management system which is available as WPS process in Malleefowl.
Go to the Wizard tab. Enter the appropriate parameters and use Next to get to the next wizard page.

You need to choose a WPS service (e.g. Malleefowl).

Choose a process (in case of Malleefowl only Dummy).

Select the input parameter of the chosen process (mime-type application/netcdf).

Select the input source (ESGF).

Select an ESGF dataset: select a category (blue) and values of this category (orange); the current selection is shown in green.

Please select only one Dataset!
You will be prompted for the password of your OpenID if your certificate is no longer valid.

On the final page you can enter some keywords for your process and mark it as favorite (when using a favorite you don't need to enter all parameters again). Press Done and the job will be started and shown in your job list My Jobs.

My Account¶
In My Account you can change your user settings (user name, organisation, OpenID, …).

You can also see your current Twitcher access token which you can use to access a registered WPS service directly.

See the Twitcher Tutorial on how to use the token to access a WPS service.
Settings (admins only)¶
When you are logged in as admin user you have the Settings page. Here you can make administrative changes and monitor services.

Register a WPS or Thredds service¶
Open the Settings/Services page. Here you can see which services are registered in the catalog service (we are using PyCSW). All these services are known and usable by Phoenix.

To add a new WPS service, press the Register a new Service button and enter the WPS URL in the field Service URL:
- hummingbird: http://localhost:8092/wps
- flyingpigeon: http://localhost:8093/wps
- emu: http://localhost:8094/wps
For example, to register Malleefowl WPS:

To add a new Thredds service press the Register a new Service button again, enter the Thredds URL and choose Thredds Catalog as service type.

Activate Users¶
Open the Settings/Users page. Here you can activate/deactivate users and also remove them. When a user has registered to the Phoenix web application the user needs to be activated before they can log in.
Choose Authentication Protocol¶
Open the Settings/Auth page. Here you can choose the different authentication protocols (OpenID, LDAP, …) which users can use on the login page. Local Auth enables the local admin account whose password is set in custom.cfg in your Phoenix installation.

GitHub Support¶
You can use GitHub accounts to log in to Phoenix. GitHub uses OAuth2. First you need to register your Phoenix application at GitHub. Then go to Settings/GitHub in your Phoenix application and enter the GitHub Consumer Key/Client ID and GitHub Consumer Secret/Client Secret:

LDAP Support¶
Basic support for authentication via LDAP has been added recently. To enable LDAP login for your environment, log in with your admin account, navigate to Settings/LDAP and configure Phoenix to match your LDAP environment.

There is no support for LDAP authorization yet. Use the Settings/Users backend to manage the access privileges for your users. There will be an entry for each user that has logged in at least once.
Solr¶
You can publish the datasets of a registered Thredds service to a Solr index server. The Solr server is setup with the Phoenix installation.

Use the toggle button on the left side of the Thredds service name to activate the publishing. Publishing takes some time. Use the reload button to update the status.
The Solr search can then be used in the Wizard to select input files.
To clear the whole Solr index use the trash button.
The publisher has two parameters:
- maxrecords – Maximum number of datasets that will be published. Use -1 for unlimited.
- depth – The maximum depth level when crawling Thredds catalogs. Default is 2.
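To illustrate how the two parameters interact, here is a hypothetical sketch of a depth-limited, record-limited crawl; the catalog is modelled as a nested dict, while the real publisher walks Thredds XML catalogs:

```python
def crawl(catalog, depth=2, maxrecords=-1):
    """Collect dataset names down to 'depth' catalog levels,
    stopping once maxrecords datasets have been collected (-1 = unlimited)."""
    found = []

    def walk(node, level):
        for name, child in node.items():
            if isinstance(child, dict):        # a sub-catalog
                if level < depth:
                    walk(child, level + 1)
            else:                              # a dataset entry
                if maxrecords != -1 and len(found) >= maxrecords:
                    return
                found.append(name)

    walk(catalog, 1)
    return found


# toy catalog: one top-level dataset, one nested, one two levels deep
catalog = {"a.nc": 1, "sub": {"b.nc": 1, "deep": {"c.nc": 1}}}
```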

Tutorial¶
The following tutorial will guide you through the first steps to get familiar with Phoenix.
Hello World¶
First you need to login. Please follow the login instructions in the user guide.
Select Emu WPS Service¶
For this example choose the Emu WPS service, which has test processes. For this go to the Processes tab.

Choose Hello World Process¶
By clicking on Emu you will get the list of available processes in Emu.

Enter Process Parameters¶
Click on Hello World and you will get a form to enter the process parameters:

Enter your name and click Submit.
Run CDO sinfo on data from Thredds Dataservice¶
First you need to login. Please follow the login instructions in the user guide.
Use the Wizard¶

Select Hummingbird WPS Service¶
For this example choose the Hummingbird WPS service which has CDO processes.

Choose “CDO sinfo” Process¶

Choose Input Parameter¶

Choose Thredds as Source¶

Choose Thredds Service¶

Choose Data from Thredds Catalog¶

Start Process¶

Display the outputs¶
Click on the Job ID link to get to the result of the submitted process.
Job Log

Job Outputs

Run CDO ensemble operation on CMIP5 data from ESGF¶
First you need to login. Please follow the login instructions in the user guide.
Use the Wizard¶

Select Hummingbird WPS Service¶
For this example choose the Hummingbird WPS service which has CDO processes.

Choose “CDO Ensembles Operation” Process¶

Choose CDO ensmean Operator¶

Choose Input Parameter¶

Choose ESGF as Source¶

Select ensembles of CMIP5 experiment¶

Start Process¶

Display the outputs¶
Click on the Details button to get to the result of the submitted process.
Outputs

Map

Run CDO ensemble operation on CORDEX data from ESGF using OpenDAP¶
First you need to login. Please follow the login instructions in the user guide.
Search and select CORDEX ensembles¶
Activate ESGF Search

Update ESGF credentials if asked

Search CORDEX Ensemble

Select Files (OpenDAP)

Choose “CDO Ensembles Operation” Process¶

Choose CDO ensmean Operator and OpenDAP datasets¶

Display the outputs¶
Click on the Details button to get to the result of the submitted process.
Outputs

Map

Creating a timeseries plot¶
First you need to login. Please follow the login instructions in the user guide.
Once you are logged in, processes can be executed and data search and download within the ESGF data archive is possible.
In this timeseries plot example we will use the Flyingpigeon WPS. Make sure Flyingpigeon is installed and running and check that it is registered in Phoenix.
There are two ways to submit a job: either with Processes or Wizard.
While with Processes you can select single operational processes, the Wizard guides you through the necessary steps to submit a job. To get an idea of the procedure choose the Wizard tab:

You could choose a favorite of a previously run job here, but in this case please choose No Favorite and click Next.
The following steps are necessary to run a visualisation job:
Select WPS Service¶
For this example choose the Flyingpigeon WPS service which has processes for the climate impact community.

Choose Process¶
By clicking on Next you'll find the list of available processes. Check the Visualisation of NetCDF files process.

Enter Process Parameters¶
Click on Next, which guides you to the process parameters:

The values in the data files are stored with defined variable names. Here are the most common ones:
- tas – mean air temperature at 2m (in Kelvin)
- tasmin – minimum air temperature at 2m (in Kelvin)
- tasmax – maximum air temperature at 2m (in Kelvin)
- pr – precipitation flux at surface (in kg m-2 s-1)
- ps – air pressure at surface
- huss – specific humidity (in kg/kg)
A list of the variable names used for the CMIP5 and CORDEX experiments can be found in Appendix B of the CORDEX archive design.
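When scripting against such files, the list above can be kept as a simple lookup table; a minimal sketch (descriptions taken from the list in this section):

```python
# Common variable names used in CMIP5/CORDEX files.
CF_VARIABLES = {
    "tas": "mean air temperature at 2m (Kelvin)",
    "tasmin": "minimum air temperature at 2m (Kelvin)",
    "tasmax": "maximum air temperature at 2m (Kelvin)",
    "pr": "precipitation flux at surface",
    "ps": "air pressure at surface",
    "huss": "specific humidity (kg/kg)",
}


def describe(variable):
    """Return a human-readable description for a variable name."""
    return CF_VARIABLES.get(variable, "unknown variable")
```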
Select Data Source¶
In the next step you will choose the data source. Currently there is only the ESGF data archive:

Search Input Files¶
This is a search GUI to find appropriate files stored in the ESGF data archive. By selecting a search category (blue buttons), you can choose the appropriate options (in orange).
In this example select the following parameters:
Category | Option |
---|---|
project | CORDEX |
domain | WAS-44 |
institute | MPI-CSC |
variable | tas |
time_frequency | day |
Double selections (like two domains) can be made by holding the Ctrl key while selecting.
For the visualisation process it is necessary that the selected variable (tas) is the same as the variable argument in the Process Parameters.
And optionally you can set the time bounds:
Start: 2005-01-01T12:00:00Z
End: 2010-12-31T12:00:00Z
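The wizard applies such time bounds by parsing the date range encoded in the file names (see esgf.search.temporal_filter in the API reference). A simplified re-implementation of the idea; the regular expression and the overlap rule here are illustrative assumptions, not Phoenix's exact logic:

```python
import re


def date_range_from_filename(filename):
    """Return (start, end) as 'YYYYMM' strings parsed from a CORDEX/CMIP5
    style filename ending in ..._YYYYMM-YYYYMM.nc, or None."""
    match = re.search(r"_(\d{6})-(\d{6})\.nc$", filename)
    if not match:
        return None
    return match.group(1), match.group(2)


def in_time_bounds(filename, start="200501", end="201012"):
    """True if the file's time range overlaps [start, end]."""
    parsed = date_range_from_filename(filename)
    if parsed is None:
        return True  # keep files whose name we cannot parse
    file_start, file_end = parsed
    # 'YYYYMM' strings compare correctly as plain strings
    return file_start <= end and file_end >= start
```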
The Selection should look similar to the following screenshot:

Check your credentials¶
To access ESGF data you need an X.509 proxy certificate from ESGF. You can update your certificate in My Account. The proxy certificate is valid only for a few hours. The wizard checks whether your certificate is still valid; if not, you will be asked to update it on the following wizard page.

Start the process¶
On the final page Done of the wizard you can give some descriptive keywords for your process. You can also save it as a favorite so that you can run the same job again later.

Press Done and the job will start.
Monitor running Job¶
The job is now submitted and can be monitored on the My Jobs page:

The job is running: data will be downloaded and the analysis of the data starts. In this case a field mean over the several experiments will be performed and an appropriate timeline drawn.
When the job has finished, the status bar turns green:

Use the Birdhouse Solr Search in the Wizard¶
First you need to login. Please follow the login instructions in the user guide.
Prepare Solr Search (Admins only)¶
Register a Thredds catalog in Settings/Services. For example use:
http://www.esrl.noaa.gov/psd/thredds/catalog/Datasets/ncep.reanalysis2.dailyavgs/catalog.html
Index this Thredds catalog in Settings/Solr.
Use the Wizard¶

Select Hummingbird WPS Service¶
For this example choose the Hummingbird WPS service which has CDO processes.

Choose “CDO sinfo” Process¶

Choose Input Parameter¶

Choose Birdhouse Solr as Source¶

Choose Data from Solr Search¶

Start Process¶

Display the outputs¶
Click on the Job ID link to get to the result of the submitted process.
Job Log

Job Outputs

Troubleshooting¶
Phoenix does not start¶
Phoenix needs running mongodb and pycsw services. Sometimes Phoenix is started when these services are not ready yet. In that case start the services manually, in the order mongodb, pycsw, Phoenix, with:
$ source activate pyramid-phoenix # activate conda environment used by phoenix
$ supervisorctl restart mongodb
$ supervisorctl restart pycsw
$ supervisorctl restart phoenix
You can also try to restart all services with:
$ supervisorctl restart all
or:
$ make restart
Check the log files to see the error messages:
$ tail -f ~/birdhouse/var/log/supervisor/phoenix.log
$ tail -f ~/birdhouse/var/log/supervisor/celery.log
Nginx does not start¶
From a former installation there might be nginx files left with wrong permissions. Remove those files:
$ ~/birdhouse/etc/init.d/supervisord stop
$ sudo rm -rf ~/birdhouse/var/run
$ sudo rm -rf ~/birdhouse/var/log
$ ~/birdhouse/etc/init.d/supervisord start
Sphinx AutoAPI Index¶
This page is the top-level of your generated API documentation. Below is a list of all items that are documented here.
catalog¶
Module Contents¶
- catalog.includeme(config)
- catalog.catalog_factory(registry)
- catalog._fetch_thredds_metadata(url, title=None): Fetch capabilities metadata from a Thredds catalog service and return a record dict.
- catalog._fetch_wps_metadata(url, title=None): Fetch capabilities metadata from a WPS service and return a record dict.
- class catalog.Catalog
  - get_record_by_id(identifier)
  - delete_record(identifier)
  - insert_record(record)
  - harvest(url, service_type, service_name=None, service_title=None, public=False, c4i=False)
  - get_service_name(record): Get service name from twitcher registry for given service (url).
  - get_service_by_name(name): Get service from twitcher registry by given service name.
  - get_service_by_url(url): Get service from twitcher registry by given url.
  - get_services(service_type=None, maxrecords=100)
  - clear_services()
- class catalog.CatalogService(csw, service_registry)
  - get_record_by_id(identifier)
  - delete_record(identifier)
  - insert_record(record)
  - harvest(url, service_type, service_name=None, service_title=None, public=False, c4i=False)
  - get_services(service_type=None, maxrecords=100)
- catalog.doc2record(document): Converts a document from mongodb to a Record object.
- class catalog.MongodbCatalog(collection, service_registry): Implementation of a Catalog with MongoDB.
  - get_record_by_id(identifier)
  - delete_record(identifier)
  - insert_record(record)
  - harvest(url, service_type, service_name=None, service_title=None, public=False, c4i=False)
  - get_services(service_type=None, maxrecords=100)
  - clear_services()
security¶
see pyramid security:
Module Contents¶
- security.check_csrf_token(request)
- security.has_execute_permission(request, service_name)
- security.passwd_check(request, passphrase): code taken from IPython.lib.security. TODO: maybe import ipython
  >>> passwd_check('sha1:0e112c3ddfce:a68df677475c2b47b6e86d0467eec97ac5f4b85a', 'anotherpassword')
  False
- security.groupfinder(userid, request)
- class security.Root(request)
- security.root_factory(request)
- security.authomatic(request)
- security.authomatic_config(request)
- security.get_user(request)
- security.includeme(config)
exceptions¶
all Exceptions defined by Phoenix …
wps¶
Module Contents¶
- wps.is_opendap(data_input)
- wps.check_status(url=None, response=None, sleep_secs=2, verify=False): Run owslib.wps check_status with additional exception handling. Parameters: verify: flag to enable SSL verification (default: False). Returns: OWSLib.wps.WPSExecution object.
- wps.appstruct_to_inputs(request, appstruct): Transforms appstruct to wps inputs.
- class wps.WPSSchema(request, hide_complex=False, process=None, use_async=False, user=None, **kw): Build a Colander Schema based on the WPS data inputs. This Schema generator is based on: http://colanderalchemy.readthedocs.io/en/latest/ TODO: fix dataType in wps client
  - add_async_check()
  - add_nodes(process)
  - literal_data(data_input)
  - colander_literal_type(data_input)
  - colander_literal_widget(node, data_input)
  - bbox_data(data_input)
  - complex_data(data_input)
  - _url_node_default(data_input)
  - bind(**kw)
  - clone()
utils¶
Module Contents¶
- class utils.ActionButton(name, title=None, no_children=False, href=None, new_window=False, disabled=False, css_class="btn btn-default", icon=None)
  - url(context, request)
  - permitted(context, request)
- utils.pinned_processes(request)
- utils.skip_csrf_token(appstruct)
- utils.headline(text, max_length=120)
- utils.localize_datetime(dt, tz_name="UTC"): Provide a timezone-aware object for a given datetime and timezone name.
- utils.is_url(url): Check whether the given text is a url or not.
- utils.build_url(url, query)
- utils.wps_caps_url(url)
- utils.wps_describe_url(url, identifier)
- utils.time_ago_in_words(from_time)
- utils.root_path(path)
grid¶
Module Contents¶
- grid.get_value(record, attribute, default=None)
- class grid.CustomGrid(request, *args, **kwargs)
  - checkbox_column_format(column_number, i, record)
  - render_td(renderer, **data)
  - label_td(attribute, default=None)
  - time_ago_td(attribute)
  - timestamp_td(attribute)
  - size_td(attribute)
  - userid_td(attribute)
  - user_td(attribute)
  - render_title_td(title, abstract=None, keywords=list, data=list, format=None, source="#")
  - render_flag_td(flag=False, tooltip="")
  - render_format_td(format, source)
  - render_preview_td(format, source)
  - generate_header_link(column_number, column, label_text): Override of the ObjectGrid to customize the headers. This is mostly taken from the example code in ObjectGrid itself.
  - default_header_column_format(column_number, column_name, header_label): Override of the ObjectGrid to use <th> for header columns.
  - default_header_ordered_column_format(column_number, column_name, header_label): Override of the ObjectGrid to use <th> and to add an icon that represents the sort order for the column.
views¶
Module Contents¶
- class views.MyView(request, name, title, description=None)
- views.notfound(request): This special view just renders a custom 404 page. We do this so that the 404 page fits nicely into our global layout.
- views.add_global(event)
- views.unknown_failure(request, exc)
- views.favicon_view(request)
- views.robotstxt_view(request)
tasks¶
Submodules¶
tasks.utils¶
Module Contents¶
- tasks.utils.task_result(task_id)
- tasks.utils.wait_secs(run_step=-1)
- tasks.utils.dump_json(obj)
- tasks.utils.save_log(job, error=None)
- tasks.utils.add_job(db, task_id, process_id, title=None, abstract=None, service_name=None, service=None, status_location=None, is_workflow=False, caption=None, userid=None, async=True)
- tasks.utils.get_access_token(userid)
- tasks.utils.wps_headers(userid)
processes¶
providers¶
Submodules¶
providers.esgfopenid¶
Providers which implement the |openid|_ protocol based on the `python-openid`_ library.
.. warning:: These providers are dependent on the |pyopenid|_ package.
storage¶
Submodules¶
storage.views¶
Module Contents¶
- storage.views.download(request)
- storage.views.delete(request): Handles a DELETE request. If found, deletes the file with the corresponding UUID from the server's filesystem.
- storage.views.upload(request)
- storage.views.handle_delete(request, uuid): Handles a filesystem delete based on UUID.
- storage.views.handle_upload(request, attrs): Handle a chunked or non-chunked upload. See example code: https://github.com/FineUploader/server-examples/blob/master/python/flask-fine-uploader/app.py
- storage.views.save_chunk(fs, path): Save an uploaded chunk. Chunks are stored in chunks/.
- storage.views.combine_chunks(total_parts, source_folder, dest): Combine a chunked file into a whole file again. Goes through each part, in order, and appends that part's bytes to another destination file. Chunks are stored in chunks/.
services¶
Subpackages¶
wizard¶
Subpackages¶
wizard.views¶
Submodules¶
wizard.views.wpsprocess¶
Package Contents¶
- class wizard.views.WizardFavorite(request, session): Stores wizard state in session with a name (favorite). TODO: implement as a dict?
  - names()
  - get(name)
  - set(name, state)
  - clear()
- class wizard.views.WizardState(session, initial_step="wizard", final_step="wizard_done")
  - load(state)
  - dump()
  - current_step()
  - is_first()
  - is_last()
  - next(step)
  - previous()
  - get(key, default=None)
  - set(key, value)
  - clear()
- class wizard.views.Wizard(request, name, title, description=None)
  - prev_ok()
  - next_ok()
  - use_ajax()
  - ajax_options()
  - success(appstruct)
  - appstruct()
  - schema()
  - previous_success(appstruct)
  - previous_failure(validation_failure)
  - next_success(appstruct)
  - next_failure(validation_failure)
  - generate_form(formid="deform")
  - process_form(form, action)
  - previous()
  - next(step, query=None)
  - cancel()
  - custom_view()
  - resources()
  - view()
cart¶
Submodules¶
cart.cart¶
Module Contents¶
- class cart.cart.CartItem(url, title=None, abstract=None, mime_type=None, dataset=None)
  - title()
  - abstract()
  - filename()
  - is_service()
  - is_opendap()
  - is_thredds_catalog()
  - to_json()
- class cart.cart.Cart(request)
  - add_item(url, title=None, abstract=None, mime_type=None): Add cart item.
  - remove_item(url): Remove cart item with given url.
  - count(): Returns the number of cart items.
  - has_items(): Returns True if cart items are available, otherwise False.
  - clear(): Removes all items of the cart and updates the session.
  - save(): Store cart items in session.
  - load(): Load cart items from session.
  - to_json(): Returns a json representation of all cart items.
tests¶
Submodules¶
tests.test_esgf_search¶
tests.test_form¶
supervisor¶
Subpackages¶
settings¶
Subpackages¶
Submodules¶
account¶
Submodules¶
account.base¶
Module Contents¶
- account.base.forbidden(request)
- class account.base.Account(request)
  - schema()
  - generate_form()
  - process_form(form)
  - _handle_appstruct(appstruct)
  - send_notification(email, subject, message): Sends an email notification to admins, using the pyramid_mailer module. For configuration see http://pythonhosted.org//pyramid_mailer/
  - add_user(login_id, email=None)
  - login()
  - login_success(login_id, email=None, name=None, openid=None, local=False)
  - login_failure(message=None)
  - logout()
  - register()
  - authomatic_login()
people¶
Subpackages¶
people.views¶
Submodules¶
people.views.actions¶
- class people.views.actions.Actions(request)
  - update_esgf_certs()
  - forget_esgf_certs()
  - generate_twitcher_token()
  - generate_esgf_slcs_token(): Update ESGF slcs token.
  - forget_esgf_slcs_token(): Forget ESGF slcs token.
  - esgf_oauth_callback(): Convert an authorisation grant into an access token.
  - delete_user()
- people.views.actions.includeme(config): Pyramid includeme hook. Parameters: config (pyramid.config.Configurator): app config
Submodules¶
solrsearch¶
Subpackages¶
geoform
¶
Submodules¶
geoform.form
¶
Module Contents¶
-
class
geoform.form.
BBoxValidator
¶ Bounding-Box validator which succeeds if the bbox value has the format
minx,miny,maxx,maxy
and values are in range (-180 <= x <=180
,-90 <= y <=90
).
-
class
geoform.form.
URLValidator
(allowed_schemes=None)¶ URL validator which can configured with allowed URL schemes.
-
class
geoform.form.
TextValidator
(restricted_chars=None)¶
-
class
geoform.form.
FileUploadValidator
(storage, max_size)¶ Runs all validators for file upload checks.
-
class
geoform.form.
FileFormatAllowedValidator
(storage)¶ File format extension is allowed.
-
class
geoform.form.
FileSizeLimitValidator
(storage, max_size=2)¶ File size limit validator.
You can configure the maximum size by setting the max_size option to the maximum number of megabytes that you want to allow.
geoform.widget
¶
Module Contents¶
-
class
geoform.widget.
ResourceWidget
¶ Renders an WPS ComplexType input widget with a cart and upload button.
It is based on deform.widget.TextInputWidget.
-
serialize
(field, cstruct, **kw)¶
-
deserialize
(field, pstruct)¶
-
-
class
geoform.widget.
BBoxWidget
¶ Renders a BoundingBox Widget.
Attributes/Arguments template
The template name used to render the input widget. Default:bbox
.- readonly_template
- The template name used to render the widget in read-only mode.
Default:
readonly/bbox
.
-
serialize
(field, cstruct, **kw)¶
-
deserialize
(field, pstruct)¶
esgf¶
Subpackages¶
esgf.views¶
Submodules¶
esgf.search¶
Module Contents¶
- esgf.search.date_from_filename(filename): Example cordex: tas_EUR-44i_ECMWF-ERAINT_evaluation_r1i1p1_HMS-ALADIN52_v1_mon_200101-200812.nc
- esgf.search.variable_filter(constraints, variables): Return True if variable fulfills constraints.
- esgf.search.temporal_filter(filename, start=None, end=None): Return True if file is in timerange start/end.
- esgf.search.query_params_from_appstruct(appstruct, defaults)
- esgf.search.build_constraint_dict(constraints)
- class esgf.search.ESGFSearch(request, url=None)
  - _parse_params(): Parse search params.
  - query_params(): Search params as string used for query.
  - params(): Search params as object.
  - search_items(): Search files and aggregations with download url and opendap url.
  - _run_search_items(dataset_id, search_type)
  - search_datasets(): Search datasets according to the search parameters.
esgf.slcsclient¶
Module Contents¶
- esgf.slcsclient.refresh_token(registry, token, userid)
- esgf.slcsclient.save_token(registry, token, userid): Store the token in the database.
- class esgf.slcsclient.ESGFSLCSClient(request): Redirect the user to the ESGF SLCS Server for authorisation.
  - callback(): Convert an authorisation grant into an access token.
  - refresh_token()
  - get_token()
  - save_token(token): Store the token in the database.
  - delete_token(): Remove token from database.
  - get_certificate(): Generates a new private key and certificate request, submits the request to be signed by the SLCS CA and prints the resulting key/certificate pair. Uses automatic refreshing of tokens if they have expired.
monitor¶
Subpackages¶
monitor.panels¶
Submodules¶
monitor.views¶
Submodules¶
monitor.views.actions¶
- class monitor.views.actions.NodeActions(context, request): Actions related to job monitor.
  - _selected_children(): Get the selected children of the given context. Result: list with selected children. Return type: list
  - restart_job()
  - delete_job()
  - delete_jobs(): Delete selected jobs.
  - delete_all_jobs()
  - make_public(): Make selected jobs public.
  - make_private(): Make selected jobs private.
  - set_favorite(): Set selected jobs as favorite.
  - unset_favorite(): Unset selected jobs as favorite.
  - edit_job()
  - active_jobs()
  - Build the action buttons for the monitor view based on the current state and the permissions of the user. Result: list of ActionButtons. Return type: list
- monitor.views.actions.download_wpsoutputs(request)
- monitor.views.actions.includeme(config): Pyramid includeme hook. Parameters: config (pyramid.config.Configurator): app config
monitor.views.list¶
- class monitor.views.list.CaptionSchema: This is the form schema to add and edit job captions.
- class monitor.views.list.LabelsSchema: This is the form schema to add and edit job tags/labels.
- class monitor.views.list.JobList(request)
  - filter_jobs(page=0, limit=10, tag=None, access=None, status=None, sort="created")
  - generate_caption_form(formid="deform_caption"): Generates the form used to add and edit job captions, based on the form schema.
  - process_caption_form(form)
  - generate_labels_form(formid="deform_tags"): Generates the form used to add and edit job tags/labels, based on the form schema.
  - process_labels_form(form)
  - view()