openLCA Jython Tutorial

openLCA is a Java application and, thus, runs on the Java Virtual Machine (JVM). Jython is a Python 2.7 implementation that runs on the JVM. It compiles Python code to Java bytecode which is then executed on the JVM. The final release of Jython 2.7 is bundled with openLCA. Under Window > Developer tools > Python you can find a small Python editor where you can write and execute Python scripts:

Open the Python editor

To execute a script, click the Run button in the toolbar of the Python editor:

Run a script in openLCA

The script is executed in the same Java process as openLCA. Thus, via this scripting API you have access to everything that you can do with openLCA (and also to everything that the Java and Jython runtimes provide). Here is a small example script that will show the information dialog below when you execute it in openLCA:

from org.openlca.app.util import UI, Dialog
from org.openlca.app import App

def say_hello():
    Dialog.showInfo(UI.shell(), 'Hello from Python (Jython)!')

if __name__ == '__main__':
    App.runInUI('say hello', say_hello)

Hello from Jython

Relation to standard Python

As said above, Jython runs on the JVM. It implements a large part of the Python 2.7 standard library for the JVM. For example, the following script will work when you set the file path to a valid path on your system:

import csv

with open('path/to/file.csv', 'w') as stream:
    writer = csv.writer(stream)
    writer.writerow(["data you", "may want", "to export",])

The Jython standard library is extracted to the python folder of the openLCA workspace, which is by default located in your user directory under ~/openLCA-data-1.4/python. This is also the location where you can put your own Jython 2.7 compatible modules. For example, when you create a file tutorial.py with the following function in this folder:

# ~/openLCA-data-1.4/python/tutorial.py
def the_answer():
  f = lambda s, x: s + x if x % 2 == 0 else s
  return reduce(f, range(0, 14))

You can then load it in the openLCA script editor:

import tutorial
import org.openlca.app.util.MsgBox as MsgBox

MsgBox.info('The answer is %s!' % tutorial.the_answer())

An important thing to note is that Python modules which use C extensions (like NumPy and friends), or which rely on parts of the standard library that are not implemented in Jython, will not work with Jython. If you want to interact with openLCA from standard CPython (using Pandas, NumPy, etc.), you can use the openLCA-IPC Python API.
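
For a quick impression, a minimal sketch of such an IPC connection could look like this (assuming the olca-ipc package is installed and imported as olca, and that an IPC server was started in openLCA under Window > Developer Tools > IPC Server on port 8080; see the IPC example at the end of this tutorial for details):

import olca

# connect to the IPC server that runs within openLCA
client = olca.Client(8080)

# list the product systems of the database that is active in openLCA
for system in client.get_descriptors(olca.ProductSystem):
    print(system.name, system.id)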

The openLCA API

As said above, with Jython you directly access the openLCA Java API. In Jython, you interact with a Java class in the same way as with a Python class. The openLCA API starts with a set of classes that describe the basic data model, like Flow, Process, ProductSystem. You can find these classes in the olca-modules repository.
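
For example, these model classes can be imported and used directly in the openLCA Python editor like normal Python classes (a minimal sketch; the data model is described in more detail below):

import org.openlca.core.model as model

# Java classes behave like Python classes in Jython
flow = model.Flow()
flow.name = 'Steel'
flow.flowType = model.FlowType.PRODUCT_FLOW
print flow.name, flow.flowType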

...

Using Visualization APIs

The following example shows how visualization APIs can be used from the openLCA Python API. In the example, all output amounts of Emission to air/unspecified/Chromium VI are collected from a database, transformed with f(x) = log10(x * 1e15) to get a nice distribution, and shown in a histogram using the Google Chart API. For this, an HTML page is generated and loaded into a JavaFX WebView in a separate window.

With ecoinvent 3.3 (apos), the result looks like this:

Here is the full Python code:

import json
import math

from javafx.embed.swt import FXCanvas
from org.eclipse.swt.widgets import Display, Shell
from org.eclipse.swt import SWT
from org.eclipse.swt.layout import FillLayout
from org.openlca.core.database import NativeSql, FlowDao
from org.openlca.app.util import UI


def get_flow():
    """ Get the flow `Emission to air / unspecified / Chromium VI` from the
        database.
    """
    flows = FlowDao(db).getForName('Chromium VI')
    for flow in flows:
        c = flow.category
        if c is None or c.name != 'unspecified':
            continue
        c = c.category
        if c is None or c.name != 'Emission to air':
            continue
        return flow


def get_results():
    """ Get the values for the flow from the process inputs and outputs and
        transform them: f(x) = log10(x * 1e15).
    """

    def collect_results(record):
        results.append([math.log10(record.getDouble(1) * 1e15)])
        return True

    chrom6 = get_flow()
    log.info(chrom6.name)
    results = [['Chromium VI']]
    query = 'select resulting_amount_value from tbl_exchanges where f_flow = %i' % chrom6.id
    NativeSql.on(db).query(query, collect_results)
    log.info('{} results collected', len(results))
    return results


def make_html(results):
    """ Generate the HTML page for the data. """

    html = '''<html>
    <head>
        <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
        <script type="text/javascript">
        google.charts.load("current", {packages:["corechart"]});
        google.charts.setOnLoadCallback(drawChart);
        function drawChart() {
            var data = google.visualization.arrayToDataTable(%s);
            var options = {
                title: 'Chromium VI',
                legend: { position: 'none' },
                hAxis: {
                    ticks: [0, 2, 4, 6, 8, 10, 12, 14]
                }
            };
            var chart = new google.visualization.Histogram(
                document.getElementById('chart_div'));
            chart.draw(data, options);
        }
        </script>
    </head>
    <body>
        <div id="chart_div" style="width: 900px; height: 500px;"></div>
    </body>
    </html>
    ''' % json.dumps(results)
    return html


def main():
    """ Create the results, HTML, and window with the WebView and set the HTML
        content of the WebView.
    """
    results = get_results()
    html = make_html(results)
    shell = Shell(Display.getDefault())
    shell.setText('Chromium VI')
    shell.setSize(800, 600)
    shell.setLayout(FillLayout())
    canvas = FXCanvas(shell, SWT.NONE)
    web_view = UI.createWebView(canvas)
    web_view.loadContent(html)
    shell.open()

if __name__ == '__main__':
    Display.getDefault().asyncExec(main)

The openLCA data model

The basic data model of openLCA is defined in the package org.openlca.core.model of the olca-core module. When you work with data in openLCA you will usually interact with the types of this package. In this section, we describe the basic data types and how they are related to each other. Each type is basically a Java class which you can access like a normal Python class from Jython. However, for the description of the data model we will use a simple notation where each type has a name and a set of properties which again have a type:

type TypeName {
    property      TypeOfProperty
    listProperty  List[TypeOfItems]
}

Note that we will not describe all types of the openLCA model and we will focus on the most important properties.

The basic inventory model

The openLCA data model is built around a basic inventory model which has the following components:

In this model, processes are the basic building blocks that describe the production of a material or energy, treatment of waste, provision of a service, etc. Each process has a set of exchanges that contain the inputs and outputs of flows like products, wastes, resources, and emissions of that process. The product flows (since openLCA 1.7 also waste flows) can be linked in a product system to specify the supply chain of a product or service - the functional unit of that product system. Such product systems are then used to calculate inventory and impact assessment results.

Units and unit groups

All quantitative amounts of the inputs and outputs in a process have a unit of measurement. In openLCA, convertible units are organized in unit groups. Each unit group has a reference unit to which the conversion factors of its units relate:

type Unit {
    name              String
    conversionFactor  double
    ...
}

type UnitGroup {
    name           String
    referenceUnit  Unit
    units          List[Unit]
    ...
}

Units and unit groups can be created in the following way:

import org.openlca.core.model as model

kg = model.Unit()
kg.name = 'kg'
kg.conversionFactor = 1.0

unitsOfMass = model.UnitGroup()
unitsOfMass.name = 'Units of mass'
unitsOfMass.referenceUnit = kg
unitsOfMass.units.add(kg)

Flows and flow properties

Flows are the things that are moved around as inputs and outputs (exchanges) of processes. When one process produces electricity and another process consumes electricity from the first process, both processes have an exchange (an output and an input, respectively) that references the same flow. The basic type definition of a flow looks like this:

type Flow {
    name                   String
    flowType               FlowType
    referenceFlowProperty  FlowProperty
    flowPropertyFactors    List[FlowPropertyFactor]
    ...
}

The flow type

The flowType property indicates whether the flow is a product, waste, or elementary flow. Product flows (and waste flows starting from openLCA 1.7) can link inputs and outputs of processes (like electricity) in a product system where elementary flows (like CO2) are the emissions and resources of the processes. Basically, in the calculation the flow type is used to decide whether to put an exchange amount into the technology matrix $A$ or the intervention matrix $B$ (see also the calculation section).

The type FlowType is an enumeration type with the following values: PRODUCT_FLOW, ELEMENTARY_FLOW, or WASTE_FLOW. When you create a flow, you can set the flow type in the following way:

import org.openlca.core.model as model

f = model.Flow()
f.flowType = model.FlowType.PRODUCT_FLOW
f.name = 'Liquid aluminium'

Flow properties

A flow in openLCA has physical properties (like mass or volume), called flow properties, in which the amount of a flow in a process exchange can be specified:

type FlowProperty {
    name              String
    flowPropertyType  FlowPropertyType
    unitGroup         UnitGroup
    ...
}

Like FlowType, FlowPropertyType is an enumeration type and can have the following values: PHYSICAL and ECONOMIC. The flow property type is basically only used when physical or economic allocation factors of a process are calculated automatically. With this, a flow property can be created in the following way:

mass = model.FlowProperty()
mass.flowPropertyType = model.FlowPropertyType.PHYSICAL
mass.unitGroup = unitsOfMass

For a flow, all flow properties need to be convertible by a factor which is defined by the type FlowPropertyFactor:

type FlowPropertyFactor {
    conversionFactor  double
    flowProperty      FlowProperty
}

These conversion factors are related to the reference flow property (referenceFlowProperty) of the flow:

f.referenceFlowProperty = mass
massFactor = model.FlowPropertyFactor()
massFactor.conversionFactor = 1.0
massFactor.flowProperty = mass
f.flowPropertyFactors.add(massFactor)

Processes

A process describes the inputs and outputs (exchanges) related to a quantitative reference which is typically the output product of the process:

type Process {
    name                   String
    quantitativeReference  Exchange
    exchanges              List[Exchange]
    ...
}

An input or output is described by the type Exchange in openLCA:

type Exchange {
    input        boolean
    flow         Flow
    unit         Unit
    amountValue  double
    ...
}

The Boolean property input indicates whether the exchange is an input (True) or not (False). Each exchange has a flow (like steel or CO2), a unit, and an amount, but also a flow property factor which indicates the physical quantity of the amount (note that there are different physical quantities that can have the same unit). The following example shows how we can create a process:

import org.openlca.core.model as model

p = model.Process()
p.name = 'Aluminium smelting'
output = model.Exchange()  # liquid aluminium
output.input = False
output.amountValue = 1000  # kg
p.exchanges.add(output)
p.quantitativeReference = output

Setting up an Integrated Development Environment

The integrated Python editor in openLCA is nice if you want to quickly write and execute small scripts directly in openLCA. However, if you want to do something more complicated, it is better to use an editor with advanced features like code formatting, auto-completion, etc. This chapter explains how you can set up the integrated development environment (IDE) PyDev to use it with the openLCA API.

Installing Java and Jython

As described in the previous chapters, openLCA is a standard Java desktop application. To access the openLCA API we use Jython which is a Python implementation that runs on the Java Virtual Machine (JVM) and is directly integrated in openLCA. Thus, if we want to access the openLCA API outside of openLCA we need to first install a Java Runtime Environment (JRE) >= 8 and Jython.

To install the JRE, just go to the Oracle download site, accept the license, and download the respective installation package for your platform (take the x64 package if you have a 64-bit computer and the x86 package if you have a 32-bit computer):

Java installation packages

To test if Java is correctly installed, just open a command line and execute the following command:

java -version

This should return something like this:

java version "1.8.0_101"
Java(TM) SE Runtime Environment ...

After this, we can download and run the Jython installer which is also a Java application. In the installation wizard, we select the standard installation type and an arbitrary folder, e.g. ~/openlca/jython_ide/jython_2.7:

Jython installation dialog

To test the installation, you can run the jython executable in the jython_2.7/bin folder which will open a standard Python REPL.

Installing PyDev

PyDev is a Python IDE for Eclipse with Jython support. To use it with Jython, we need Eclipse with the Java development tools; the easiest way to get this is to download the Eclipse IDE for Java Developers and extract it to a folder (e.g. ~/openlca/jython_ide/eclipse):

Eclipse download

Start the Eclipse executable and create a workspace, e.g. under ~/openlca/jython_ide/workspace (this is just a folder where your projects are stored). Now we can install PyDev via the menu Help > Install New Software.... In the installation dialog click on Add... to register the PyDev update site http://www.pydev.org/updates:

PyDev update site

Then select the PyDev package, accept the license, install it, and restart Eclipse:

Install the PyDev package

After the restart, you can configure the Jython interpreter under Window > Preferences and select the jython.jar from your Jython installation (e.g. ~/openlca/jython_ide/jython_2.7/jython.jar):

Configure the Jython interpreter

Using the openLCA API

Now you can create a new PyDev project under File > New > Project... > PyDev Project. You just need to give it a name and select Jython as interpreter:

Create a project

When you now create a script, you should be able to run it directly with the Jython interpreter:

Run a script

When you print something on the console, you may get the following error:

console: Failed to install '': java.nio.charset.UnsupportedCharsetException: cp0.

This is related to a known Jython issue which you can just ignore. To fix it, you can set the following parameter under Run > Run Configurations... > Arguments > VM arguments:

-Dpython.console.encoding=UTF-8

To use the openLCA API in the project, right click on the project and open the project Properties. Click on PyDev - PYTHONPATH and, in the External Libraries tab, on the Add zip/jar/egg button. Then select all jar files in the openlca/plugins/olca-app_<version>/libs folder of the openLCA installation you want to use:

Select the openLCA libraries

Now you should be able to use all the IDE features of PyDev like auto-completion etc.:

Auto-complete feature

Logging

openLCA uses SLF4J over Log4j for logging. The following log4j.properties configuration, for example, writes the log output of the openLCA packages at level INFO to the console:

log4j.rootLogger=INFO, A1
log4j.logger.org.openlca=INFO
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
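
From a script, such a logger can be obtained via the SLF4J API (a minimal sketch; in the openLCA Python editor a logger is already available as the global variable log, as used in the visualization example above):

from org.slf4j import LoggerFactory

log = LoggerFactory.getLogger('my-script')
log.info('Hello from the script')
log.warn('{} things to check', 42)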

Examples

Create a unit and unit group

from org.openlca.core.database.derby import DerbyDatabase
from java.io import File
import org.openlca.core.model as model
from org.openlca.core.database import UnitGroupDao, FlowPropertyDao
from java.util import UUID

# path to our database
folder = 'C:/Users/Besitzer/openLCA-data-1.4/databases/example_db1'
db = DerbyDatabase(File(folder))

# unit and unit group
kg = model.Unit()
kg.name = 'kg'
kg.conversionFactor = 1.0

mass_units = model.UnitGroup()
mass_units.name = 'Units of mass'
mass_units.units.add(kg)
mass_units.referenceUnit = kg
mass_units.refId = UUID.randomUUID().toString()

# create a data access object and insert it in the database
dao = UnitGroupDao(db)
dao.insert(mass_units)

Create a flow property

mass = model.FlowProperty()
mass.name = 'Mass'
mass.unitGroup = mass_units
mass.flowPropertyType = model.FlowPropertyType.PHYSICAL
fpDao = FlowPropertyDao(db)
fpDao.insert(mass)

Create a flow with category

from org.openlca.core.database import CategoryDao, FlowDao

category = model.Category()
category.refId = UUID.randomUUID().toString()
category.name = 'products'
category.modelType = model.ModelType.FLOW
CategoryDao(db).insert(category)

flow = model.Flow()
flow.name = 'Steel'
flow.category = category
flow.referenceFlowProperty = mass

fp_factor = model.FlowPropertyFactor()
fp_factor.flowProperty = mass
fp_factor.conversionFactor = 1.0
flow.flowPropertyFactors.add(fp_factor)
FlowDao(db).insert(flow)

Update a flow

from java.util import Date
# `util` is a user module with generic database helpers (see the next example);
# `create_flow` is a function that creates the flow if it does not exist yet
flow = util.find_or_create(db, model.Flow, 'Steel', create_flow)
flow.description = 'My first flow ' + str(Date())
flow = util.update(db, flow)

Create generic database functions

from org.openlca.core.database import Daos

def insert(db, value):
    Daos.createBaseDao(db, value.getClass()).insert(value)

def delete_all(db, clazz):
    dao = Daos.createBaseDao(db, clazz)
    dao.deleteAll()

def find(db, clazz, name):
    """ Find something by name"""
    dao = Daos.createBaseDao(db, clazz)
    for item in dao.getAll():
        if item.name == name:
            return item
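
These helpers can then be used like this (a sketch, assuming the imports and objects from the previous examples; when stored as a module, e.g. util.py in the python folder of the workspace, they can also be imported as util as in some snippets of this section):

insert(db, mass_units)
insert(db, mass)
steel = find(db, model.Flow, 'Steel')
if steel is not None:
    print steel.refId, steel.name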

Create a process

process = model.Process()
process.name = 'Steel production'

steel_output = model.Exchange()
steel_output.input = False
steel_output.flow = flow
steel_output.unit = kg
steel_output.amountValue = 1.0
steel_output.flowPropertyFactor = flow.getReferenceFactor()

process.exchanges.add(steel_output)
process.quantitativeReference = steel_output

util.insert(db, process)

Update a process

from org.openlca.core.database.derby import DerbyDatabase as Db
from java.io import File
from org.openlca.core.database import ProcessDao

if __name__ == '__main__':
    db_dir = File('C:/Users/Besitzer/openLCA-data-1.4/databases/openlca_lcia_methods_1_5_5')
    db = Db(db_dir)
    
    dao = ProcessDao(db)
    p = dao.getForName("p1")[0]
    p.description = 'Test 123'
    dao.update(p)
    
    db.close()

Insert a new parameter

  • create a parameter and insert it into a database

from org.openlca.core.database.derby import DerbyDatabase
from org.openlca.core.model import Parameter, ParameterScope
from java.io import File
from org.openlca.core.database import ParameterDao

if __name__ == '__main__':
    param = Parameter()
    param.scope = ParameterScope.GLOBAL
    param.name = 'k_B'
    param.inputParameter = True
    param.value = 1.38064852e-23
    
    db_dir = File('C:/Users/Besitzer/openLCA-data-1.4/databases/ztest')
    db = DerbyDatabase(db_dir)
    dao = ParameterDao(db)
    dao.insert(param)
    db.close()

Update a parameter

from java.io import File
from org.openlca.core.database.derby import DerbyDatabase
from org.openlca.core.database import ParameterDao

if __name__ == '__main__':
    db_dir = File('C:/Users/Besitzer/openLCA-data-1.4/databases/ztest')
    db = DerbyDatabase(db_dir)
    dao = ParameterDao(db)
    param = dao.getForName('k_B')[0]
    param.value = 42.0    
    dao.update(param)
    db.close()

Run a calculation

  • connect to a database and load a product system
  • load the optimized, native libraries
  • calculate and print the result

from java.io import File
from org.openlca.core.database.derby import DerbyDatabase
from org.openlca.core.database import ProductSystemDao, EntityCache
from org.openlca.core.matrix.cache import MatrixCache
from org.openlca.eigen import NativeLibrary
from org.openlca.eigen.solvers import DenseSolver
from org.openlca.core.math import CalculationSetup, SystemCalculator
from org.openlca.core.results import ContributionResultProvider

if __name__ == '__main__':
    # load the product system
    db_dir = File('C:/Users/Besitzer/openLCA-data-1.4/databases/ei_3_3_apos_dbv4')
    db = DerbyDatabase(db_dir)
    dao = ProductSystemDao(db)
    system = dao.getForName('rice production')[0]
    
    # caches, native lib., solver
    m_cache = MatrixCache.createLazy(db)
    e_cache = EntityCache.create(db)
    NativeLibrary.loadFromDir(File('../native'))
    solver = DenseSolver()
    
    # calculation
    setup = CalculationSetup(system)
    calculator = SystemCalculator(m_cache, solver)
    result = calculator.calculateContributions(setup)
    provider = ContributionResultProvider(result, e_cache)
    
    for flow in provider.flowDescriptors:
        print flow.getName(), provider.getTotalFlowResult(flow).value
    
    db.close()

Using the formula interpreter

from org.openlca.expressions import FormulaInterpreter

if __name__ == '__main__':
    fi = FormulaInterpreter()
    gs = fi.getGlobalScope()
    gs.bind('a', '1+1')
    ls = fi.createScope(1)
    print ls.eval('2*a')

Get weighting results

from java.io import File
from org.openlca.core.database.derby import DerbyDatabase
from org.openlca.core.database import ProductSystemDao, EntityCache,\
    ImpactMethodDao, NwSetDao
from org.openlca.core.matrix.cache import MatrixCache
from org.openlca.eigen import NativeLibrary
from org.openlca.eigen.solvers import DenseSolver
from org.openlca.core.math import CalculationSetup, SystemCalculator
from org.openlca.core.model.descriptors import Descriptors
from org.openlca.core.results import ContributionResultProvider
from org.openlca.core.matrix import NwSetTable


if __name__ == '__main__':
    # load the product system
    db_dir = File('C:/Users/Besitzer/openLCA-data-1.4/databases/openlca_lcia_methods_1_5_5')
    db = DerbyDatabase(db_dir)
    dao = ProductSystemDao(db)
    system = dao.getForName('s1')[0]
    
    # caches, native lib., solver
    m_cache = MatrixCache.createLazy(db)
    e_cache = EntityCache.create(db)
    NativeLibrary.loadFromDir(File('../native'))
    solver = DenseSolver()
    
    # calculation
    setup = CalculationSetup(system)
    setup.withCosts = True
    method_dao = ImpactMethodDao(db)
    setup.impactMethod = Descriptors.toDescriptor(method_dao.getForName('eco-indicator 99 (E)')[0])
    nwset_dao = NwSetDao(db)
    setup.nwSet = Descriptors.toDescriptor(nwset_dao.getForName('Europe EI 99 E/E [person/year]')[0])
    calculator = SystemCalculator(m_cache, solver)
    result = calculator.calculateContributions(setup)
    provider = ContributionResultProvider(result, e_cache)
    
    for i in provider.getTotalImpactResults():
        if i.value != 0:
            print i.impactCategory.name, i.value
    
    # weighting
    nw_table = NwSetTable.build(db, setup.nwSet.id)    
    weighted = nw_table.applyWeighting(provider.getTotalImpactResults())
    for i in weighted:
        if i.value != 0:
            print i.impactCategory.name, i.value
    
    print provider.totalCostResult
    db.close()
    

Using the sequential solver

import org.openlca.core.math.CalculationSetup as Setup
import org.openlca.app.db.Cache as Cache
import org.openlca.core.math.SystemCalculator as Calculator
import org.openlca.core.results.ContributionResultProvider as Provider
import org.openlca.app.results.ResultEditorInput as EditorInput
import org.openlca.eigen.solvers.SequentialSolver as Solver
import org.openlca.app.util.Editors as Editors

solver = Solver(1e-12, 1000000)
solver.setBreak(0, 1)
system = olca.getSystem('preparation')
setup = Setup(system)
calculator = Calculator(Cache.getMatrixCache(), solver)
result = calculator.calculateContributions(setup)
provider = Provider(result, Cache.getEntityCache())
input = EditorInput.create(setup, provider)
Editors.open(input, "QuickResultEditor")

for i in solver.iterations:
  print i

Using SQL

from org.openlca.core.database.derby import DerbyDatabase
from java.io import File
from org.openlca.core.database import NativeSql

folder = 'C:/Users/Besitzer/openLCA-data-1.4/databases/example_db1'
db = DerbyDatabase(File(folder))

query = 'select * from tbl_unit_groups'

def fn(r):
    print r.getString('REF_ID')
    return True

# see http://greendelta.github.io/olca-modules/olca-core/apidocs/org/openlca/core/database/NativeSql.html
NativeSql.on(db).query(query, fn)

Create a location with KML data

The example below creates a location with KML data and stores it in the database.

import org.openlca.util.BinUtils as BinUtils
import org.openlca.core.database.LocationDao as Dao
import org.openlca.core.model.Location as Location

loc = Location()
loc.name = 'Points'
loc.code = 'POINTS'

kml = '''
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <MultiGeometry>
      <Point>
        <coordinates>5.0,7.5</coordinates>
      </Point>
      <Point>
        <coordinates>5.0,2.5</coordinates>
      </Point>
      <Point>
        <coordinates>15.0,5.0</coordinates>
      </Point>
    </MultiGeometry>
  </Placemark>
</kml>
'''.strip()

loc.kmz = BinUtils.zip(kml)
dao = Dao(db)
dao.insert(loc)

Export used elementary flows to a CSV file

The following script exports the elementary flows that are used in the processes of the currently activated database into a CSV file.

import csv
from org.openlca.core.database import FlowDao, NativeSql
from org.openlca.util import Categories


# set the path to the resulting CSV file here
CSV_FILE = 'C:/Users/ms/Desktop/used_elem_flows.csv'

def main():
    global db

    # collect the IDs of the used elementary flows
    # via an SQL query
    ids = set()
    sql = '''
    SELECT DISTINCT f.id FROM tbl_flows f
      INNER JOIN tbl_exchanges e ON f.id = e.f_flow
      WHERE f.flow_type = 'ELEMENTARY_FLOW'
    '''

    def collect_ids(r):
        ids.add(r.getLong(1))
        return True

    NativeSql.on(db).query(sql, collect_ids)

    # load the flows and write them to a CSV file
    flows = FlowDao(db).getForIds(ids)
    with open(CSV_FILE, 'wb') as f:
        writer = csv.writer(f, delimiter=',')
        writer.writerow([
            'Ref. ID', 'Name', 'Category', 'Ref. Flow property', 'Ref. Unit'
        ])

        for flow in flows:
            writer.writerow([
                flow.refId,
                flow.name,
                '/'.join(Categories.path(flow.category)),
                flow.referenceFlowProperty.name,
                flow.referenceFlowProperty.unitGroup.referenceUnit.name
            ])

if __name__ == '__main__':
    main()

Calculate a product system with many different parameter values that are read from a CSV file, and store the results in Excel

This script uses the IPC server of openLCA for connecting via Python to openLCA.

It calculates all product systems existing in the selected database with an LCIA method from this database that is identified by name. The calculations are done for all parameter sets in the CSV file.

To use it, adapt the LCIA method name and the paths to the CSV file and to the Excel output files. Create the CSV file with the UUID of a parameter in the first column, the parameter name in the second, and the parameter values of the different sets in the following columns, one column per set. You can also extend the number of parameter sets (in the script and, correspondingly, in the CSV file). Then start the IPC server in openLCA on port 8080 and execute the script in an external Python IDE such as PyCharm.

import csv
import os
import sys

import olca
import pandas
import arrow as ar


def main():

    # the Excel files with the results are written to the `output` folder
    if not os.path.exists('output'):
        os.makedirs('output')

    start = ar.now()
    # make sure that you started an IPC server with the specific database in
    # openLCA (Window > Developer Tools > IPC Server)
    client = olca.Client(8080)

    # first we read the parameter sets; they are stored in a data frame where
    # each column is a different parameter set
    # 1st column: parameter UUID
    # 2nd column: parameter name
    # last column: process name, for documentation
    parameters = read_parameters(
        'relative/path/to/csvfile.csv')

    # we prepare a calculation setup for the given LCIA method and reuse it
    # for the different product systems in the database
    calculation_setup = prepare_setup(client, 'The Name of the LCIA method')

    # we run a calculation for each combination of parameter set and product
    # system that is in the database
    for system in client.get_descriptors(olca.ProductSystem):
        print('Run calculations for product system %s (%s)' %
              (system.name, system.id))
        calculation_setup.product_system = system
        for parameter_set in range(0, parameters.shape[1]):
            set_parameters(calculation_setup, parameters, parameter_set)

            try:
                calc_start = ar.now()
                print('  . run calculation for parameter set %i' % parameter_set)
                result = client.calculate(calculation_setup)
                print('  . calculation finished in', ar.now() - calc_start)

                # we store the Excel file under
                # `output/<system id>_<parameter set>.xlsx`
                excel_file = 'output/%s_%d.xlsx' % (system.id, parameter_set)
                export_and_dispose(client, result, excel_file)

            except Exception as e:
                print('  . calculation failed: %s' % e)

    print('All done; total runtime', ar.now() - start)


def read_parameters(file_path: str) -> pandas.DataFrame:
    """ Read the given parameter table into a pandas data frame where the
        parameter names are mapped to the index.
        Assumption: not more than 5 parameter sets; if there are more, the
        column range below and the CSV file can be changed accordingly.
    """
    index = []
    data = []
    with open(file_path, 'r', encoding='cp1252') as stream:
        reader = csv.reader(stream, delimiter=';')
        for row in reader:
            index.append(row[1])
            data.append([float(x) for x in row[2:7]])
        return pandas.DataFrame(data=data, index=index)


def prepare_setup(client: olca.Client, method_name: str) -> olca.CalculationSetup:
    """ Prepare the calculation setup with the LCIA method with the given name.
        Note that this is just an example. You can of course get a method by
        ID, calculate a system with all LCIA methods in the database etc.
    """
    method = client.find(olca.ImpactMethod, method_name)
    if method is None:
        sys.exit('Could not find LCIA method %s' % method_name)
    setup = olca.CalculationSetup()
    # currently, simple calculation, contribution analysis, and upstream
    # analysis are supported
    setup.calculation_type = olca.CalculationType.CONTRIBUTION_ANALYSIS
    setup.impact_method = method
    # amount is the amount of the functional unit (fu) of the system that
    # should be used in the calculation; unit, flow property, etc. of the fu
    # can be also defined; by default openLCA will take the settings of the
    # reference flow of the product system
    setup.amount = 1.0
    return setup


def set_parameters(setup: olca.CalculationSetup, parameters: pandas.DataFrame,
                   parameter_set: int):
    """ Set the parameters of the given parameter set (which is the
        corresponding column in the data frame) to the calculation setup.
    """
    # for each parameter in the parameter set we add a parameter
    # redefinition in the calculation setup which will set the parameter
    # value for the respective parameter just for the calculation (without
    # needing to modify the database)
    setup.parameter_redefs = []
    for param in parameters.index:
        redef = olca.ParameterRedef()
        redef.name = param
        redef.value = parameters.ix[param, parameter_set]
        setup.parameter_redefs.append(redef)


def export_and_dispose(client: olca.Client, result: olca.SimpleResult, path: str):
    """ Export the given result to Excel and dispose it after the Export
        finished.
    """
    try:
        print('  . export result to', path)
        start = ar.now()
        client.excel_export(result, path)
        time = ar.now() - start
        print('  . export finished after', time)
        print('  . dispose result')
        client.dispose(result)
        print('  . done')
    except Exception as e:
        print('ERROR: Excel export or dispose of %s failed' % path)


if __name__ == '__main__':
    main()

The CSV file can look as follows, one line per parameter:

484895ce-a443-4cc4-8864-a10a407e93aa;para1;0.014652146;0.014718301;0.017931926;0.020983646;0.020427708;;;processname 1
8fc4c8a5-8e98-46b7-9a21-d1d8481e5aa4;para2;0.030556766;0.034429313;0.044245529;0.047132534;0.049762465;;;processname 2
66a0200f-ce81-494c-8131-1aee0c0a59f2;NP3;0.016611061;0.010960006;0.005260319;0.002222134;0.00099458;;;processname 3
410ed118-4201-47d3-88bd-da6af3bfd555;NP4;0;0.00017588;0.000994777;0.001486167;0.001806687;;;processname 4
247b42d0-9715-465f-9500-61fc51bbfe84;NP4;0.015608129;0.018301197;0.017860159;0.016049407;0.015421637;;;processname 5
f90551c7-3c92-474f-9d10-03562ed8c8ae;NP5;0.008727944;0.010233887;0.009987262;0.008974703;0.008623659;;;processname 6
19063103-a7b5-4f61-9a3a-c3ec44133c67;NP6;0.005293286;0.006206604;0.006057032;0.005442939;0.00523004;;;processname 7