Commit a7a0a7dc authored by Christiane Reinert's avatar Christiane Reinert


Update .gitignore, .gitlab-ci.yml, 1_basic_setup.pct.py, 2_creating_new_versions.pct.py, functions_to_modify_ecoinvent.py, index.rst, LICENSE, README.md, requirements.txt, Code_Documentation_git.pdf, data/IEA variable names.xlsx, data/lci-hydro.xlsx, data/iea-region-topolgy.json, doc/conf.py, doc/functions_to_modify_ecoinvent.rst, doc/getting_started.rst, doc/global.rst, doc/make.bat, doc/Makefile files
*.ipynb
*.xlsx
*.csv
.ipynb_checkpoints/*
__pycache__/*
doc/_build/*
variables:
  SOURCEDIR: "."
  DOCDIR: "doc/"

before_script:
  - pip install -r requirements.txt

pages:
  image: registry.git-ce.rwth-aachen.de/ltt/vorkettenanalyse/doc-build
  tags:
    - jupytext
    - nbsphinx
  script:
    - sphinx-build -b html $SOURCEDIR -c $DOCDIR public
  artifacts:
    paths:
      - public
  only:
    - master
  stage: deploy
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,.pct.py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#     jupytext_version: 1.4.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# %% [markdown]
# # Basic Setup
# In this notebook, we create a new project and import ecoinvent 3.5 with the apos system model.
# %% [markdown]
# Check your current versions of important Python packages. This code was written with Brightway 2.3, bw2io 0.7.11.3 and wurst 0.2.
# %%
import brightway2 as bw
print("brightway2", bw.__version__)
import bw2io
print("bw2io", bw2io.__version__)
import wurst
print("wurst", wurst.__version__)
# %% [markdown]
# ## Choosing the project
# Prepare Brightway2. First, check which projects are available.
# %%
bw.projects.report()
# %% [markdown]
# <div class="alert alert-info">
#
# Note
#
# Change the default project name here. You will also need to specify this project name
# in [2_creating_new_versions.ipynb](2_creating_new_versions.ipynb).
#
# You can choose any of the already existing projects or create your own new project.
# If you are using an already existing project, please make sure that you are not overwriting any important data.
#
# </div>
# %%
bw.projects.set_current("your_project")
bw.databases
# %% [markdown]
# ## Setting up Brightway2
# First, we need to do the basic setup of Brightway2's biosphere database and the predefined methods.
# If the biosphere database exists already, we assume that the setup was already done.
# %%
if "biosphere3" not in bw.databases:
    bw.bw2setup()
# %% [markdown]
# Next, we import the ecoinvent database. We use version 3.5 of the apos system model. If you are using a different system model, change the database name accordingly.
#
# <div class="alert alert-info">
#
# Note
#
# You will need to change the path to the ecoinvent datasets folder!
#
# </div>
# %%
filepath = "data/ecoinvent 3.5_apos_ecoSpold02/datasets"
ei = bw.SingleOutputEcospold2Importer(filepath, "ecoinvent3.5apos")
ei.apply_strategies()
ei.statistics()
ei.write_database()
# %% [markdown]
# Check that the database has been created under the specified name:
# %%
bw.databases
# %%
bw.projects.report()
# %% [markdown]
# ## Parsing Scenario data
# Our ecoinvent modifications are based on the predicted technology composition
# (referred to as *scenarios*) from the IEA report
# [Energy Technology Perspectives 2017](https://www.iea.org/reports/energy-technology-perspectives-2017).
# To use the scenarios they provide, we first have to split up the summary table.
# We do this using [pandas](https://pandas.pydata.org/).
# %%
import pandas as pd
import os
filename = "data/ETP2017_scenario_summary.xlsx"
dfs = pd.read_excel(filename, sheet_name=None)
# %% [markdown]
# The imported Excel file contains some sheets that do not contain any data. We delete those here.
# %%
dfs.pop("Information", None)
dfs.pop("Graph", None)
scenarios = {}
# %% [markdown]
# The different scenarios are separated by empty columns and the different sectors are separated by empty rows,
# thus we define a function to help us split up a DataFrame by empty rows/columns.
#
# The function's inputs are a DataFrame ``df`` and a ``val_dict`` that maps the column/row indices of ``df`` to boolean values.
# It will split ``df`` on the columns/rows that are marked ``False`` in ``val_dict``.
# Optionally, the ``axis`` parameter (default 1) can be used to switch between operations on columns (1) and rows (0).
# %%
def split_by_val(df, val_dict, axis=1):
    slc = lambda df, idx: df.loc[:, idx] if axis == 1 else df.iloc[idx]
    res = []
    cr_idx = []
    for i, val in val_dict.items():
        if val:
            cr_idx.append(i)
        elif cr_idx:
            res.append(slc(df, cr_idx))
            cr_idx = []
    if cr_idx:
        res.append(slc(df, cr_idx))
    return res
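# %% [markdown]
# As a quick sanity check, the splitting behaviour can be reproduced on a toy DataFrame
# (the column names below are made up for illustration):

```python
import numpy as np
import pandas as pd

def split_by_val(df, val_dict, axis=1):
    # split df into contiguous blocks of columns (axis=1) or rows (axis=0)
    # wherever val_dict marks an index as False
    slc = lambda df, idx: df.loc[:, idx] if axis == 1 else df.iloc[idx]
    res = []
    cr_idx = []
    for i, val in val_dict.items():
        if val:
            cr_idx.append(i)
        elif cr_idx:
            res.append(slc(df, cr_idx))
            cr_idx = []
    if cr_idx:
        res.append(slc(df, cr_idx))
    return res

# two blocks of columns separated by an all-NaN column
toy = pd.DataFrame({
    "A": [1, 2], "B": [3, 4],
    "sep": [np.nan, np.nan],
    "C": [5, 6],
})
parts = split_by_val(toy, toy.notna().any())
print([list(p.columns) for p in parts])  # [['A', 'B'], ['C']]
```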
# %% [markdown]
# Now we split up all the DataFrames (sheets), first by empty rows and then by empty columns.
# This yields a new DataFrame for every combination of scenario, sector and region.
# We delete unneeded data and columns from those DataFrames and
# transform them to the same format originally used in
# [Mendoza Beltran et al (2018)](https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.12825).
# Finally, we save them to a dictionary indexed with a tuple containing the scenario, sector and region.
# %% tags=[]
for country, df in dfs.items():
    scen_dfs = split_by_val(df, df.notna().any())
    for d in scen_dfs:
        name = next(n.split("-") for n in d.columns if "Unnamed" not in n)
        name = name[-1].strip()
        scens = split_by_val(d, ~df.isna().all(axis="columns"), axis=0)
        res = {}
        for i in scens[1:]:
            i_name = i.iat[0, 0]
            i = i.iloc[1:]
            i = i.set_index(i.columns[1])
            i = i.drop(columns=i.columns[0])
            i = i.drop(index=["Other", "Total"], errors="ignore")
            i.columns = scens[0].iloc[0].iloc[2:]
            i = i.rename_axis(None)
            scenarios[(name, i_name, country)] = i.T
# %% [markdown] jupyter={"outputs_hidden": true}
# The resulting DataFrames have a column for every technology listed in the respective sector
# and each row corresponds to the prediction made for a certain year.
#
# We can now export the DataFrames. We create an Excel file for every sector with a sheet
# containing the respective DataFrame for each country.
# %% tags=[]
scen_names = set(n for n, _, _ in scenarios)
sectors = set(s for _, s, _ in scenarios)
countries = set(c for _, _, c in scenarios)
for scen in scen_names:
    for sec in sectors:
        directory = "data/" + scen + "/"
        os.makedirs(directory, exist_ok=True)
        fpath = os.path.abspath(directory + sec + ".xlsx")
        with pd.ExcelWriter(fpath) as writer:
            for country, df in [
                (key[2], d)
                for key, d in scenarios.items()
                if key[0] == scen and key[1] == sec
            ]:
                df.to_excel(writer, sheet_name=country)
# %% [markdown]
# ## Preparing additional LCI
# The additional LCIs provided with Mendoza Beltran, A. et al. (2018) contain a summary sheet
# which conflicts with the Brightway2 ExcelImporter. We remove these sheets here.
# %%
import openpyxl
for orig, copy in {
    "data/5907amb2z_SI_lci-Carma-CCS.xlsx": "data/lci-Carma-CCS.xlsx",
    "data/5907amb2z_SI_lci-CSP.xlsx": "data/lci-CSP.xlsx",
}.items():
    book = openpyxl.load_workbook(orig)
    book.remove(book["Summary"])
    book.save(copy)
# %% [markdown]
# ## Generating Sign table
# In ecoinvent's official cumulated matrices, some processes are multiplied with -1.
# Those processes yield a credit in the LCA. To retain this information in our
# exported matrices, we extract the signs from the official ecoinvent cumulated matrix table.
#
# <div class="alert alert-info">
#
# Note
#
# In the following, change the file path depending on where your copy of ecoinvent's cumulative LCIA is located.
# If the table does not generate properly, it is most likely due to the layout of the cumulative LCIA differing
# from the version we originally used. In this case, you will need to change some of the import parameters in the following cell.
# </div>
# %%
data = pd.read_excel(
    "data/v35_apos_cumulative_LCIA.xlsx",
    usecols="B:D",
    skiprows=[0, 1],
    names=["process", "unit", "sign"],
)
# %% [markdown]
# The next step is to match ecoinvent and wurst units. This is necessary because ecoinvent uses acronyms, while wurst uses the full name.
# %%
from functions_to_modify_ecoinvent import unit_ei_to_bw
data["unit"] = data["unit"].apply(lambda x: unit_ei_to_bw.get(x, x))
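# %% [markdown]
# The mapping is a plain ``dict.get`` fallback: known acronyms are replaced, everything else
# passes through unchanged. A minimal sketch (the two mapping entries below are illustrative
# assumptions; the real table is ``unit_ei_to_bw`` in ``functions_to_modify_ecoinvent.py``):

```python
import pandas as pd

# Illustrative stand-in for unit_ei_to_bw; these two entries are assumptions,
# the real mapping lives in functions_to_modify_ecoinvent.py.
unit_ei_to_bw = {"kWh": "kilowatt hour", "m3": "cubic meter"}

data = pd.DataFrame({"unit": ["kWh", "kg", "m3"]})
# unknown acronyms fall through unchanged via dict.get(x, x)
data["unit"] = data["unit"].apply(lambda x: unit_ei_to_bw.get(x, x))
print(data["unit"].tolist())  # ['kilowatt hour', 'kg', 'cubic meter']
```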
# %% [markdown]
# Export the sign table.
# %%
data.to_excel("data/ecoinvent_signs.xlsx", index=False)
# -*- coding: utf-8 -*-
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,.pct.py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#     jupytext_version: 1.4.2
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# %% [markdown]
# # Modify ecoinvent
#
# Here we do our main analysis, using [functions to modify ecoinvent](doc/functions_to_modify_ecoinvent.rst).
# The modified versions of ecoinvent are based on a dictionary of parameters.
# %% nbsphinx="hidden"
import brightway2 as bw
import pandas as pd
import copy
# %% [markdown]
# <div class="alert alert-info">
#
# Note
#
# The specified project needs to contain an ecoinvent database. Use the same project as in [1_basic_setup.ipynb](1_basic_setup.ipynb).
#
# </div>
# %%
bw.projects.set_current("your_project")
# %% [markdown]
# The functions we use to integrate scenarios in ecoinvent are based on the [wurst](https://github.com/polca/wurst) package
# and are documented in the ["Functions" section](doc/functions_to_modify_ecoinvent.rst).
# %%
from functions_to_modify_ecoinvent import *
# %% [markdown]
# ## Database selection
# After having a look at all available databases...
# %%
bw.databases
# %% [markdown]
# ... we choose an ecoinvent database as a basis for all our newly created ones.
# %%
ecoinvent_db_name = "ecoinvent3.5apos"
# %% [markdown]
# ## Prepare additional datasets
#
# The IEA scenarios that we implement into the database contain some technologies that
# lack corresponding datasets in the ecoinvent database.
# We modeled the missing datasets from literature sources and import them here.
# The literature sources are referred to in Reinert, C. et al. (2020).
#
# We now import all LCIs that are not already present from their respective Excel files.
# %%
for k, fp in {
    "Carma CCS": "data/lci-Carma-CCS.xlsx",
    "CSP": "data/lci-CSP.xlsx",
    "Hydro": "data/lci-hydro.xlsx",
}.items():
    if k not in bw.databases:
        sp = bw.ExcelImporter(fp)
        sp.apply_strategies()
        sp.match_database(fields=["name", "unit", "location"])
        sp.match_database(
            ecoinvent_db_name, fields=["reference product", "name", "unit", "location"]
        )
        sp.match_database(ecoinvent_db_name, fields=["name", "unit", "location"])
        sp.write_database()
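# %% [markdown]
# The three ``match_database`` calls above try progressively looser field sets:
# first within the imported data itself, then against ecoinvent including the
# reference product, then without it. The idea behind field-based matching can be
# sketched with plain dicts (a toy illustration, not bw2io's implementation):

```python
# Toy sketch of field-based matching: an exchange links to a dataset
# when all requested fields agree. Not bw2io's actual code.
def match(exchange, datasets, fields):
    for ds in datasets:
        if all(exchange.get(f) == ds.get(f) for f in fields):
            return ds
    return None

datasets = [
    {"name": "electricity", "unit": "kilowatt hour", "location": "DE"},
    {"name": "electricity", "unit": "kilowatt hour", "location": "GLO"},
]
exc = {"name": "electricity", "unit": "kilowatt hour", "location": "DE"}
hit = match(exc, datasets, ["name", "unit", "location"])
print(hit["location"])  # DE
```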
# %% [markdown]
# ## Import new datasets and ecoinvent into wurst format
# %%
input_db = extract_brightway2_databases(
    ["CSP", "Carma CCS", "Hydro", ecoinvent_db_name]
)
# %% [markdown]
# As some datasets don't have a location specified and some have unset exchange locations,
# we need to fix these inconsistencies. Additionally, we set the location of all new datasets
# to global, since we will later use [wurst](https://github.com/polca/wurst) functionality
# to regionalize some of the data, and the regionalization function requires a global location code.
# %%
default_global_location(input_db)
fix_unset_technosphere_and_production_exchange_locations(input_db)
set_global_location_for_additional_datasets(input_db, ecoinvent_db_name)
remove_nones(input_db)
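# %% [markdown]
# These helpers come from ``functions_to_modify_ecoinvent.py`` (building on wurst).
# As a rough, hedged illustration of the kind of cleanup they perform on wurst-format
# datasets (plain dicts), not the actual implementations:

```python
# Sketch only: assign the global location code "GLO" to datasets
# that have no location set, mirroring what default_global_location
# is described as doing on wurst-format data.
def default_global_location_sketch(db):
    for ds in db:
        if ds.get("location") in (None, ""):
            ds["location"] = "GLO"
    return db

toy_db = [
    {"name": "csp plant", "location": None},
    {"name": "hydro plant", "location": "DE"},
]
default_global_location_sketch(toy_db)
print([ds["location"] for ds in toy_db])  # ['GLO', 'DE']
```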
# %% [markdown]
# We are using the [constructive geometries library](https://github.com/cmutel/constructive_geometries)
# (by Chris Mutel). The library contains some naming inconsistencies, as has already been noted in
# [Mendoza Beltran et al. (2018)](https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.12825), which we correct:
# %%
rename_locations(input_db, fix_names)
# %% [markdown]
# ## Create regional versions of additional datasets
# Our additional LCIs contain only global versions of each additional dataset.
# However, the IEA data that we are working with is regionalized.
# We use [wurst's](https://github.com/polca/wurst) functionality to make regional copies of the
# new datasets imported from Excel and relink all exchanges with regional ones, where available.
# %%
add_new_locations_to_added_datasets_iea(input_db)
regionalize_added_datasets(input_db)
# %% [markdown]
# ## Define scenarios and years to be used
# Here, we set the scenarios and years we want to use when creating a new version of ecoinvent.
# Each entry in the ``database_dict`` will create a database named after the key with the parameters
# supplied as the value. Those parameters are documented within
# [apply transformation's parameters](doc/functions_to_modify_ecoinvent.rst#functions_to_modify_ecoinvent.apply_transformations),
# as we pass them to that function later on.
#
# <div class="alert alert-info">
#
# Note
#
# Adapt these parameters as needed to fit your own research.
#
# </div>
# %%
database_dict = {}
database_dict["ei35_elec_dc_2016"] = {
    "year": 2016,
    "scenario": "2DS",
    "change_elec": True,
    "change_dc": True,
}
for year in range(2020, 2051, 5):
    database_dict["ei35_elec_dc_" + str(year)] = {
        "year": year,
        "scenario": "2DS",
        "change_elec": True,
        "change_dc": True,
    }
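# %% [markdown]
# For reference, the loop above yields one parameter set for 2016 plus one per
# five-year step from 2020 through 2050, eight databases in total:

```python
# Rebuild the dictionary standalone and count the resulting database names.
database_dict = {"ei35_elec_dc_2016": {"year": 2016, "scenario": "2DS",
                                       "change_elec": True, "change_dc": True}}
for year in range(2020, 2051, 5):
    database_dict["ei35_elec_dc_" + str(year)] = {
        "year": year, "scenario": "2DS",
        "change_elec": True, "change_dc": True,
    }
print(len(database_dict))  # 8
print(sorted(database_dict)[-1])  # ei35_elec_dc_2050
```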
# %% [markdown]
# Now we do the main processing for every defined set of parameters:
#
# 1. Creating a copy of the database to perform the calculations on.
#
# 2. Applying all the transformations specified in the parameters dictionary to this database copy.
#
# 3. Deleting databases of the same name that are already present in our Brightway2 project,
# as we assume these to be leftover from previous runs of the code.
# <div class="alert alert-warning">
#
# Make sure that this step does not delete important results!
# </div>
#
# 4. Reverting all of the location naming changes that were necessary due to the constructive geometries library.
#
# 5. Fixing some possibly mismatched locations.
#
# 6. Exporting the new database to Brightway2.
#
# 7. Performing an LCA for all available processes and exporting to .xlsx for further use,
# for example in our research as input of energy systems models.
# If you just need the Brightway2 database, you can deactivate this step, as it is quite computationally intensive.
#
# %%
for db_name, parameters in database_dict.items():
    db = copy.deepcopy(input_db)
    apply_transformations(db, **parameters)
    # delete db if existent
    if db_name in bw.databases:
        del bw.databases[db_name]
    # export db
    rename_locations(db, fix_names_back)
    link_fix_mismatched_locations(db)
    write_brightway2_database(db, db_name)
    # excel export
    excel_export(db_name)
BSD 3-Clause License
Copyright (c) 2020, Chair of Technical Thermodynamics, RWTH Aachen University
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Code documentation
Our documentation is available
[here](https://ltt.pages.git-ce.rwth-aachen.de/vorkettenanalyse/).
{
"Russia": [
"RU"
],
"China": [
"CN",
"HK"
],
"Mexico": [
"MX"
],
"India": [
"IN"
],
"European Union": [
"BG",
"CY",
"CZ",
"EE",
"HR",
"HU",
"LT",
"LV",
"PL",
"RO",
"SI",
"SK",
"AT",
"BE",
"DE",
"DK",
"ES",
"FI",
"FR",
"GB",
"GR",
"IE",
"IT",
"LU",
"MT",
"NL",
"PT",
"SE"
],
"South Africa": [
"ZA"
],
"Brazil": [
"BR"
],
"United States": [
"US"
],
"ASEAN": [
"BN",
"KH",
"ID",
"LA",
"MM",
"MY",
"PH",
"SG",
"TH",
"VN"
],
"OECD": [
"CA",
"CL",
"JP",
"KR",
"IL",
"AU",
"NZ",
"IS",
"NO",
"CH",
"TR"
],
"NonOECD": [
"AF",
"AL",
"DZ",
"AS",
"AO",
"AR",
"AM",
"AZ",
"BD",
"BB",
"BY",
"BZ",
"BJ",
"BT",
"BO",
"BA",
"BW",
"BF",
"BI",
"CM",
"CV",
"CF",
"TD",
"CL",
"CO",
"KM",
"CG",
"CD",
"CR",
"CI",
"CU",
"DJ",
"DM",
"DO",
"EC",
"EG",
"SV",
"GQ",
"ER",
"EE",
"ET",
"FJ",
"GA",
"GM",
"GE",
"GH",
"GD",
"GT",
"GN",
"GW",
"GY",
"HT",
"HN",
"IR",
"IQ",
"JM",
"JO",
"KZ",
"KE",
"KI",
"KG",
"LB",
"LS",
"LR",
"LY",
"MK",
"MG",
"MW",
"MV",
"ML",
"MH",
"MR",
"MU",
"YT",
"FM",
"MD",
"MN",
"MA",
"MZ",
"NA",
"NP",
"NI",
"NE",
"NG",
"MP",
"OM",
"PK",
"PW",
"PA",
"PG",
"PY",
"PE",
"RW",
"WS",
"ST",
"SN",
"CS",
"SC",
"SL",
"SB",
"SO",
"LK",
"KN",
"LC",
"VC",
"SD",
"SR",
"SZ",
"SY",
"TJ",
"TZ",
"TL",
"TG",
"TO",
"TT",
"TN",
"TR",
"TM",
"UG",
"UY",
"UZ",
"VU",
"VE",
"YE",
"ZM",
"ZW",
"SA",
"GI",
"TW",
"UA",
"AE",
"KW",
"CW",
"KP",
"XK",
"QA",
"BH",
"SS"
]
}
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -c .
SPHINXBUILD ?= sphinx-build
SOURCEDIR = ..
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
# Configuration file for the Sphinx documentation builder.
# -- Path setup -------------------------------------------------------------
import os
import sys
sys.path.insert(0, os.path.abspath(".."))
# -- Project information ----------------------------------------------------
project = "Creating new versions of ecoinvent"
copyright = "2020, Chair of Technical Thermodynamics, RWTH Aachen University"
author = "Chair of Technical Thermodynamics, RWTH Aachen University"
# The full version, including alpha/beta/rc tags
release = ""
# latex_elements = {"releasename": ""} is in latex options
# -- General configuration --------------------------------------------------
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.napoleon",
    "nbsphinx",
    "IPython.sphinxext.ipython_console_highlighting",
    "IPython.sphinxext.ipython_directive",
]

exclude_patterns = [
    "_build",
    "Thumbs.db",
    ".DS_Store",
    "**.ipynb_checkpoints",
    "*.ipynb",
    "doc/global.rst",
]

with open("global.rst", "r") as f:
    rst_prolog = f.read()

# -- Options for HTML output ------------------------------------------------
html_theme = "sphinx_rtd_theme"
html_theme_path = [
    "_themes",
]
# -- Options for LaTeX output -----------------------------------------------
latex_engine = "xelatex"
latex_elements = {
    "inputenc": "",
    "utf8extra": "",
    "releasename": "",
    "preamble": r"""
\usepackage{fontspec}
\setsansfont{Arial}
\setromanfont{Arial}
\setmonofont{DejaVu Sans Mono}
""",
}
# -- Extension Options ------------------------------------------------------
# nbsphinx
nbsphinx_execute = "never"
nbsphinx_custom_formats = {
    ".pct.py": ["jupytext.reads", {"fmt": "py:percent"}],
}

nbsphinx_prolog = r"""
{% set docname = env.doc2path(env.docname, base=None) %}

.. raw:: html

    <div class="admonition note">
      <p>This page was generated from {{ docname }}.
      </p>
    </div>
"""
Functions
=========
.. py:currentmodule:: functions_to_modify_ecoinvent
As the code is largely based on Brightway2 and wurst,
we refer to all types of processes as *activities*
and to the flows between processes as *exchanges*,
to remain consistent with the terminology of these libraries.
Input parameters of many of the following functions are activities or a database of activities.
Activities and databases are always expected to be in the `data format specified in the wurst library
<https://wurst.readthedocs.io/#internal-data-format>`__.
Electricity activities
----------------------
.. autofunction:: update_electricity_markets_iea
.. autofunction:: add_new_datasets_to_electricity_market_iea
.. autofunction:: delete_electricity_inputs_from_market
.. autofunction:: add_new_locations_to_added_datasets_iea
.. autofunction:: regionalize_added_datasets
Searching
^^^^^^^^^
All functions referring to geographical intersection internally use
constructive_geometries' ``Geomatcher.intersects`` functionality,
which is documented `as part of the docs of constructive_geometries
<https://constructive-geometries.readthedocs.io/?badge=latest#constructive_geometries.geomatcher.Geomatcher.intersects>`_.
.. autofunction:: find_ecoinvent_elec_ds_in_all_locations_iea
.. autofunction:: find_ecoinvent_elec_ds_in_iea_region
.. autofunction:: find_ecoinvent_elec_ds_in_same_ecoinvent_location_iea
.. autofunction:: find_other_ecoinvent_regions_in_iea_region
.. autofunction:: ecoinvent_to_iea_locations
Scenario handling
-----------------
.. autofunction:: interpolate_linear
.. autofunction:: find_empty_columns
.. autofunction:: apply_transformations
.. autofunction:: find_average_mix
.. autofunction:: get_iea_markets
Database Utilities
------------------
.. autofunction:: fix_unset_technosphere_and_production_exchange_locations
.. autofunction:: set_global_location_for_additional_datasets
.. autofunction:: remove_nones
.. autofunction:: link_fix_mismatched_locations
.. autofunction:: rename_locations
.. autofunction:: get_exchange_amounts
.. autodata:: unit_ei_to_bw
Double Counting
---------------
The issue of double counting only arises under certain circumstances.
In our case, it was due to the later use of the processed data in a
model for a sector-coupled energy system in Germany.
Thus, ``get_double_counted_process_keys`` is specific to German electricity
and heat production, but can easily be adapted.
.. autofunction:: get_double_counted_process_keys
.. autofunction:: set_dc_process_inputs_to_zero
Export
------
.. autofunction:: get_sign_mapping
.. autofunction:: excel_export
Getting Started
===============
Required Software
-----------------

Python 3.3 or greater
    This code has been developed and tested with both
    Anaconda Python and CPython but will work under any Python implementation.

Packages
    * ``jupyter``
    * ``pandas``
    * ``brightway2``
    * ``wurst``
    * ``xlsxwriter``
Optional
^^^^^^^^

Jupytext
    We used jupytext_ to version control our code. If you downloaded the code
    directly from our repository, you will need to run

    .. code-block:: bash

        jupytext --sync *.pct.py

    in the main directory to convert the script files into Jupyter notebooks,
    which you can then open in the web interface.

    ``functions_to_modify_ecoinvent.py`` can also be viewed as a Jupyter Notebook
    using jupytext.

.. _jupytext: https://github.com/mwouts/jupytext
Datasets
--------
We need a variety of additional datasets
apart from the LCI already provided in the ``data`` directory.
Please also place all of these in the ``data`` directory or change the filepaths in
1_basic_setup_ and 2_creating_new_versions_ according to where you saved them.
Ecoinvent Dataset
    Our analysis was done using the ecoinvent 3.5 apos dataset.
    As discussed in our paper,
    our additional LCI datasets provided in ``data/lci-hydro.xlsx``
    are also based on this version.
    Additionally, you will need the cumulated matrices provided by ecoinvent for
    your respective version.

Technology Predictions
    We modify ecoinvent based on technology composition mix predictions made by
    the IEA in |IEA_Report|_.
    You will need to retrieve the ``Scenario data`` files (zip)
    from the Downloads section of the linked webpage and unpack them,
    so that they can be processed in 1_basic_setup_.

Additional LCI
    As most of our research expands on |Mendoza_cit|_, we also need to import
    the LCI provided in the supporting information, downloadable from the linked
    webpage.
First Steps
-----------
First, we import the ecoinvent data into a Brightway2 database. All
the required steps are done in 1_basic_setup_.
.. _1_basic_setup: 1_basic_setup.ipynb
.. _2_creating_new_versions: 2_creating_new_versions.ipynb
.. Citations
.. |IEA_Report| replace:: *Energy Technology Perspectives 2017*
.. _IEA_Report: https://www.iea.org/reports/energy-technology-perspectives-2017
.. |Mendoza| replace:: *When the Background Matters\: Using Scenarios from Integrated Assessment Models in Prospective Life Cycle Assessment*
.. _Mendoza: https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.12825
.. |Mendoza_cit| replace:: Mendoza Beltran, A. et al. (2018)
.. _Mendoza_cit: Mendoza_
.. |Reinert| replace:: *Environmental Impacts of the Future German Energy System from Integrated Energy Systems Optimization and Dynamic Life Cycle Assessment*
.. |Reinert_cit| replace:: Reinert, C. et al. (2021)
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd
Code documentation
==============================================================
This code is part of the supporting information of |Reinert|, by |Reinert_cit|.
It is heavily based on the code supplied with |Mendoza|_, by |Mendoza_cit|.
To construct the modified ecoinvent databases from |Reinert_cit| we used
some additional LCI datasets, IEA predictions for future technology compositions
and the ecoinvent database itself. The preparation of the data is outlined
in `Basic Setup <1_basic_setup.ipynb>`_ and the process to modify ecoinvent
is documented in `Creating new Versions <2_creating_new_versions.ipynb>`_.
This documentation was last updated on 14.12.2020.
.. toctree::
    :caption: Contents:

    doc/getting_started
    Basic Setup <../1_basic_setup>
    Creating new versions <../2_creating_new_versions>
    doc/functions_to_modify_ecoinvent.rst