Extend a Simulation with Python
Overview
WarpX’s Python bindings let you integrate Python code directly into a WarpX simulation. Through this interface, you can access and modify simulation data, such as particle properties and field values, as the simulation runs. This versatility opens the door to a wide range of workflows, including:
Adding a custom physics module (for instance, a specific collision model) that may not yet be available in WarpX’s C++ implementation, and that can be quickly implemented in Python.
Coupling WarpX with another simulation tool that has a Python interface, enabling both codes to operate on the same particle or field data.
Incorporating AI-based surrogate models built in Python (e.g., with PyTorch or TensorFlow) to emulate complex physical processes.
If your custom Python code uses high-performance, GPU-accelerated libraries – such as cupy, pytorch, or numba – the extra computations are unlikely to significantly impact simulation speed. Note that WarpX’s Python bindings provide direct access to particle and field data without creating copies, resulting in very low overhead.
How to run a simulation with Python extensions
Install WarpX with support for the Python interface: for instance, if you compile WarpX from source, this involves using -DWarpX_PYTHON=ON.
Write a Python script that extends the simulation: this can be done starting from a simulation defined either with a parameter list or with the PICMI Python interface. The Python script typically contains callback functions that access/modify the simulation data (see the sections below for more details).
When starting from a parameter list, write a Python script that loads the parameter list file using the load_inputs_file() method:
from pywarpx import warpx
sim = warpx
sim.load_inputs_file("./inputs_test_3d_laser_acceleration")
# register callbacks ...
# advance simulation until the last time step
sim.step()
Full Example
This example is available at Examples/Physics_applications/laser_acceleration/inputs_test_3d_laser_acceleration_python.py and it requires the files inputs_test_3d_laser_acceleration and inputs_base_3d from the same folder.
#!/usr/bin/env python3
#
# Starting from an inputs file, define a WarpX simulation
# and extend it with Python logic.
from pywarpx import warpx
from pywarpx.callbacks import callfromafterstep
sim = warpx
sim.load_inputs_file("./inputs_test_3d_laser_acceleration")
# Optional: Define callbacks, e.g., after every step
@callfromafterstep
def my_simple_callback():
"""This simple callback uses particle container and MultiFab objects,
https://warpx.readthedocs.io/en/latest/usage/workflows/python_extend.html#particles
and
https://warpx.readthedocs.io/en/latest/usage/workflows/python_extend.html#fields
"""
print(" my_simple_callback")
# electrons: access (and potentially manipulate)
electrons = sim.particles.get("electrons")
print(f" {electrons}")
# electric field: access (and potentially manipulate)
Ex = sim.fields.get("Efield_fp", dir="x", level=0)
print(f" {Ex}")
@callfromafterstep
def my_advanced_callback():
"""This callback dives deeper using pyAMReX methods and data containers directly.
https://pyamrex.readthedocs.io/en/latest/usage/compute.html
"""
print(" my_advanced_callback")
# the pyAMReX module
amr = sim.extension.amr
amr.Print(f" {amr.ParallelDescriptor.NProcs()} MPI process(es) active")
# electrons: access (and potentially manipulate)
electrons = sim.particles.get("electrons")
print(f" {electrons}")
# electric field: access (and potentially manipulate)
Ex_mf = sim.fields.get("Efield_fp", dir="x", level=0)
print(f" {Ex_mf}")
# Advance simulation until the last time step
sim.step()
When starting from a PICMI Python script, simply add the Python code that extends the simulation to this script, before the call to step().
# Preparation: set up the simulation
# sim = picmi.Simulation(...)
# ...
# register callbacks ...
sim.step(nsteps=1000)
Then, run the simulation by executing the Python script, for instance using mpirun (or srun on an HPC system):
mpirun -np <n_ranks> python <python_script>
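On an HPC system with Slurm, a typical equivalent launch would be (a sketch; the exact flags depend on the machine):
srun -n <n_ranks> python <python_script>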
Callback Functions
Installing a callback function causes a given Python function to be executed at a specific location in the WarpX simulation loop.
Callback Locations
These are the functions that allow installing user-created functions so that they are called at various places during the time step.
The following three functions allow the user to install, uninstall, and check the different callback types.
installcallback(): Installs a function to be called at that specified time
uninstallcallback(): Uninstalls the function (so it won’t be called anymore)
isinstalled(): Checks if the function is installed
These functions all take a callback location name (string) and function or instance method as an argument. Note that if an instance method is used, an extra reference to the method’s object is saved.
Functions can be called at the following times:
loadExternalFields: during WarpX::LoadExternalFields to write B/Efield_fp_external values
beforeInitEsolve: before the initial solve for the E fields (i.e. before the PIC loop starts)
afterInitEsolve: after the initial solve for the E fields (i.e. before the PIC loop starts)
afterinit: immediately after the init is complete
beforeEsolve: before the solve for E fields (not called during init E solve, use beforeInitEsolve to apply to first solve)
poissonsolver: in place of the computePhi call, but only in an electrostatic simulation
afterEsolve: after the solve for E fields (not called after init E solve, use afterInitEsolve to apply to first solve)
afterBpush: after the B field advance for electromagnetic solvers
afterEpush: after the E field advance for electromagnetic solvers
beforedeposition: before the particle deposition (for charge and/or current)
afterdeposition: after particle deposition (for charge and/or current)
beforestep: before the time step
afterstep: after the time step
afterdiagnostics: after diagnostic output
oncheckpointsignal: on a checkpoint signal
onbreaksignal: on a break signal; these callbacks will be the last ones executed before the simulation ends
particlescraper: just after the particle boundary conditions are applied, but before lost particles are processed
particleloader: at the time that the standard particle loader is called
particleinjection: called when particle injection happens, after the position advance and before deposition is called, allowing a user-defined particle distribution to be injected each time step
Example that calls the Python function myplots after each step:
from pywarpx.callbacks import installcallback
def myplots():
    # do something here
    pass

installcallback('afterstep', myplots)
# run simulation
sim.step(nsteps=100)
The installation can also be done using a Python decorator, which has the prefix callfrom.
The above example is equivalent to the following, which installs the function myplots to be called after each step:
from pywarpx.callbacks import callfromafterstep
@callfromafterstep
def myplots():
    # do something here
    pass
# run simulation
sim.step(nsteps=100)
- pywarpx.callbacks.installcallback(name, f)
Installs a function to be called at that specified time.
Adds a function to the list of functions called by this callback.
- pywarpx.callbacks.isinstalled(name, f)
Checks if a function is installed for this callback.
- pywarpx.callbacks.uninstallcallback(name, f)
Uninstalls the function (so it won’t be called anymore).
Removes the function from the list of functions called by this callback.
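For instance, a minimal sketch that combines these three functions, reusing the myplots function and sim object from the examples above:
from pywarpx.callbacks import installcallback, isinstalled, uninstallcallback

# install the callback and verify that it is registered
installcallback('afterstep', myplots)
assert isinstalled('afterstep', myplots)

# run a few steps with the callback active
sim.step(nsteps=10)

# remove the callback again and continue without it
uninstallcallback('afterstep', myplots)
sim.step(nsteps=10)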
pyAMReX
The Python interface to WarpX is provided through pyAMReX. After the simulation is initialized, the pyAMReX module can be accessed (if needed) via
from pywarpx import picmi, libwarpx
# ... simulation definition ...
# equivalent to
# import amrex.space3d as amr
# for a 3D simulation
amr = libwarpx.amr # picks the right 1d, 2d or 3d variant
Full details of the pyAMReX APIs are documented at https://pyamrex.readthedocs.io. The major objects used in the WarpX interface are of types defined by pyAMReX. Important APIs include:
amr.ParallelDescriptor: MPI-parallel rank information
amr.MultiFab: MPI-parallel field data
amr.ParticleContainer_*: MPI-parallel particle data for a particle species
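Continuing with the amr handle obtained above, a minimal sketch of rank-aware output (MyProc() is assumed to be bound in pyAMReX, as in AMReX’s ParallelDescriptor):
# amr.Print writes only on the I/O rank, avoiding duplicated output
amr.Print(f"Running with {amr.ParallelDescriptor.NProcs()} MPI process(es)")

# rank-dependent logic
if amr.ParallelDescriptor.MyProc() == 0:
    print("This line is printed by MPI rank 0 only")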
Data Access
While the simulation is running, the user has read and write access to the WarpX simulation data in situ, for example to be used in callbacks.
An important object in the pywarpx.picmi module for data access is Simulation.extension.warpx, which is available only during the simulation run.
This object is the Python equivalent to the C++ WarpX simulation class.
- class pywarpx.callbacks.WarpX
- getistep(lev: int)
Get the current step on mesh-refinement level lev.
- gett_new(lev: int)
Get the current physical time on mesh-refinement level lev.
- getdt(lev: int)
Get the current physical time step size on mesh-refinement level lev.
- multi_particle_container()
- get_particle_boundary_buffer()
- set_potential_on_domain_boundary(potential_[lo/hi]_[x/y/z]: str)
The potential on the domain boundaries can be modified when using the electrostatic solver. This function updates the strings and function parsers which set the domain boundary potentials during the Poisson solve.
- set_potential_on_eb(potential: str)
The embedded boundary (EB) conditions can be modified when using the electrostatic solver. This sets the EB potential string and updates the function parser.
- evolve(numsteps=-1)
Evolve the simulation the specified number of steps.
- step(numsteps=-1)
An alias to the evolve method.
- finalize(finalize_mpi=1)
Call finalize for WarpX and AMReX. Registered to run at program exit.
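A minimal sketch of calling some of these methods from a callback, assuming a PICMI sim object set up as in the examples above:
from pywarpx.callbacks import callfromafterstep

@callfromafterstep
def report_time():
    warpx = sim.extension.warpx
    step = warpx.getistep(0)  # current step on mesh-refinement level 0
    t = warpx.gett_new(0)     # current physical time (s) on level 0
    dt = warpx.getdt(0)       # current time step size (s) on level 0
    print(f"step {step}: t = {t:.3e} s, dt = {dt:.3e} s")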
The WarpX object also provides read and write access to field MultiFab and ParticleContainer data, as shown in the following examples.
Fields
All of the data on the grids can be accessed, with each field returned as a MultiFab instance.
This callback example accesses the Ex(x,y,z) field at level 0 after every time step and sets all of the values to 42.
This shows how to loop over levels and grid blocks.
from pywarpx import picmi
from pywarpx.callbacks import callfromafterstep
# Preparation: set up the simulation
# sim = picmi.Simulation(...)
# ...
@callfromafterstep
def set_Ex():
    warpx = sim.extension.warpx
    # data access
    # vector field E, component x, on the fine patch of MR level 0
    Ex_mf = sim.fields.get("Efield_fp", dir=0, level=0)
    # scalar field rho, on the fine patch of MR level 0
    rho_mf = sim.fields.get("rho_fp", level=0)
    # compute on Ex_mf
    # iterate over mesh-refinement levels
    for lev in range(warpx.finest_level + 1):
        # grow (aka guard/ghost/halo) regions
        ngv = Ex_mf.n_grow_vect
        # get every local block of the field
        for mfi in Ex_mf:
            # global index space box, including guards
            bx = mfi.tilebox().grow(ngv)
            print(bx)  # note: global index space of this block
            # numpy/cupy representation: non-copying view, including
            # the guard/ghost region
            Ex = Ex_mf.array(mfi).to_xp()
            # notes on indexing in Ex:
            # - numpy/cupy use locally zero-based indexing
            # - layout is F_CONTIGUOUS by default, just like AMReX
            # notes:
            # Only the next lines are the "HOT LOOP" of the computation.
            # For efficiency, we use array operation for speed.
            Ex[()] = 42.0
sim.step(nsteps=100)
The physical fields in WarpX have the following naming convention:
_fp are the “fine” patches, the regular resolution of a current mesh-refinement level
_aux are temporary (auxiliary) patches at the same resolution as _fp. Depending on the algorithms being used, they can be averaged spatially or include contributions from other levels. These are the fields that will be interpolated to the particles.
_cp are “coarse” patches, at the same resolution (but not necessarily the same values) as the _fp of level - 1 (only for level 1 and higher).
For further details on how to access GPU data or compute on Ex, please see the pyAMReX documentation.
Various operations can be done using the MultiFab objects. For example, to find the maximum value, use Ex.max(), and to multiply the data by a factor, Ex.mult(2.).
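For example, continuing with the field wrappers introduced above:
Ex = sim.fields.get("Efield_fp", dir=0, level=0)
print(Ex.max())  # maximum value of Ex over the domain
Ex.mult(2.)      # multiply all Ex values by a factor of 2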
The field MultiFab object provides access to the data via global indexing.
Using standard array indexing with square brackets, the data can be accessed using indices that are relative to the full domain (across the MultiFab and across processors).
When the data is fetched, the result is a numpy array containing a copy of the data; when running on multiple processors, the data is broadcast to all processors (this is a global operation).
For indices within the domain, values from valid cells are always returned.
The ghost cells at the exterior of the domain are accessed using imaginary numbers, with negative values accessing the lower ghost cells, and positive the upper ghost cells.
This example will return the Bz field at all valid interior points along x at the specified y and z indices.
Bz = sim.fields.get("Bfield_fp", dir=2, level=0)
Bz_along_x = Bz[:,5,6]
The same global indexing can be done to set values. This example will set the values over a range in y and z at the
specified x. The data will be scattered appropriately to the underlying FABs. The set is a local operation.
Jy = sim.fields.get("current_fp", dir=1, level=0)
Jy[5,6:20,8:30] = 7.
In this example, seven is added to all of the values along x, including both valid and ghost cells (specified by using the empty tuple, ()), the first ghost cell at the lower boundary in y, and the last valid cell and first upper ghost cell in z.
Note that the += will be a global operation.
Jx = sim.fields.get("current_fp", dir=0, level=0)
Jx[(),-1j,-1:2j] += 7.
To fetch the data from all of the valid cells of all dimensions, the ellipsis can be used, Jx[...].
Similarly, to fetch all of the data including valid cells and ghost cells, use an empty tuple, Jx[()].
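For example, continuing with the Jx wrapper from above:
Jx_valid = Jx[...]  # copy of all valid cells (global operation)
Jx_all = Jx[()]     # copy of all valid and ghost cells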
The code does error checking to ensure that the specified indices are within the bounds of the global domain.
New MultiFabs can be created at the Python level and added to the registry. Using this method, the new MultiFabs are handled in the same way as internal MultiFabs; for example, the data can be redistributed during load balancing (when the flags are set as shown in the example). In this example, a new MultiFab is added with the same properties as Ex.
Ex = sim.fields.get("Efield_fp", dir=0, level=0)
normalized_Ex = sim.fields.alloc_init(name="normalized_Ex",
dir=0,
level=0,
ba=Ex.box_array(),
dm=Ex.dm(),
ncomp=Ex.n_comp,
ngrow=Ex.n_grow_vect,
initial_value=0.,
redistribute=True,
redistribute_on_remake=True)
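A minimal sketch of filling the newly created MultiFab, assuming the object returned by alloc_init supports the same block iteration as the field wrappers shown earlier:
max_Ex = Ex.max()
if max_Ex != 0.0:
    for mfi in normalized_Ex:
        # non-copying views of the local block
        norm_arr = normalized_Ex.array(mfi).to_xp()
        Ex_arr = Ex.array(mfi).to_xp()
        norm_arr[()] = Ex_arr[()] / max_Ex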
Particles
from pywarpx import picmi
from pywarpx.callbacks import callfromafterstep
# Preparation: set up the simulation
# sim = picmi.Simulation(...)
# ...
@callfromafterstep
def my_after_step_callback():
    warpx = sim.extension.warpx
    Config = sim.extension.Config
    # data access
    multi_pc = warpx.multi_particle_container()
    pc = multi_pc.get_particle_container_from_name("electrons")
    # compute
    # iterate over mesh-refinement levels
    for lvl in range(pc.finest_level + 1):
        # get every local chunk of particles
        for pti in pc.iterator(pc, level=lvl):
            # compile-time and runtime attributes in SoA format
            soa = pti.soa().to_cupy() if Config.have_gpu else \
                pti.soa().to_numpy()
            # notes:
            # Only the next lines are the "HOT LOOP" of the computation.
            # For speed, use array operation.
            # write to all particles in the chunk
            # note: careful, if you change particle positions, you might need to
            #       redistribute particles before continuing the simulation step
            soa.real[0][()] = 0.30  # x
            soa.real[1][()] = 0.35  # y
            soa.real[2][()] = 0.40  # z
            # all other attributes: weight, momentum x, y, z, ...
            for soa_real in soa.real[3:]:
                soa_real[()] = 42.0
            # by default empty unless ionization or QED physics is used
            # or other runtime attributes were added manually
            for soa_int in soa.int:
                soa_int[()] = 12
sim.step(nsteps=100)
For further details on how to access GPU data or compute on electrons, please see the pyAMReX documentation.
High-Level Particle Wrapper
Note
TODO: What are the benefits of using the high-level wrapper? TODO: What are the limitations (e.g., in memory usage or compute scalability) of using the high-level wrapper?
Particles can be added to the simulation at specific positions and with specific attribute values:
from pywarpx import particle_containers, picmi
# ...
electron_wrapper = particle_containers.ParticleContainerWrapper("electrons")
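For example, a minimal sketch that injects three electrons at chosen positions and checks the particle count (all values are arbitrary illustration numbers):
import numpy as np

electron_wrapper.add_particles(
    x=np.array([0.0, 1.0e-6, 2.0e-6]),
    y=np.zeros(3),
    z=np.full(3, 5.0e-6),
    ux=np.zeros(3),
    uy=np.zeros(3),
    uz=np.full(3, 1.0e7),  # proper velocities (m/s)
    w=np.full(3, 1.0e12),  # particle weights
)
print(electron_wrapper.get_particle_count())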
- class pywarpx.particle_containers.ParticleContainerWrapper(species_name)
Wrapper around particle containers. This provides a convenient way to query and set data in the particle containers.
- Parameters:
species_name (string) – The name of the species to be accessed.
- add_particles(x=None, y=None, z=None, ux=None, uy=None, uz=None, w=None, unique_particles=True, **kwargs)
A function for adding particles to the WarpX simulation.
- Parameters:
species_name (str) – The type of species for which particles will be added
x (arrays or scalars) – The particle positions (m) (default = 0.)
y (arrays or scalars) – The particle positions (m) (default = 0.)
z (arrays or scalars) – The particle positions (m) (default = 0.)
ux (arrays or scalars) – The particle proper velocities (m/s) (default = 0.)
uy (arrays or scalars) – The particle proper velocities (m/s) (default = 0.)
uz (arrays or scalars) – The particle proper velocities (m/s) (default = 0.)
w (array or scalars) – Particle weights (default = 0.)
unique_particles (bool) – True means the added particles are duplicated by each process; False means the number of added particles is independent of the number of processes (default = True)
kwargs (dict) – Containing an entry for all the extra particle attribute arrays. If an attribute is not given it will be set to 0.
- add_real_comp(pid_name, comm=True)
Add a real component to the particle data array.
- Parameters:
pid_name (str) – Name that can be used to identify the new component
comm (bool) – Should the component be communicated
- deposit_charge_density(level, clear_rho=True, sync_rho=True)
Deposit this species’ charge density in rho_fp in order to access that data via pywarpx.fields.RhoFPWrapper().
- Parameters:
species_name (str) – The species name that will be deposited.
level (int) – The AMR level on which to deposit the charge density.
clear_rho (bool) – If True, zero out rho_fp before deposition.
sync_rho (bool) – If True, perform MPI exchange and properly set boundary cells for rho_fp.
- get_particle_count(local=False)
Get the number of particles of this species in the simulation.
- Parameters:
local (bool) – If True the particle count on this processor will be returned. Default False.
- Returns:
An integer count of the number of particles
- Return type:
int
- get_particle_cpu(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘cpu’ numbers on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle cpus
- Return type:
List of arrays
- get_particle_id(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘id’ numbers on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle ids
- Return type:
List of arrays
- get_particle_idcpu(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘idcpu’ numbers on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle idcpu
- Return type:
List of arrays
- get_particle_idcpu_arrays(level, copy_to_host=False)
This returns a list of numpy or cupy arrays containing the particle idcpu data on each tile for this process.
Unless copy_to_host is specified, the data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle array data
- Return type:
List of arrays
- get_particle_int_arrays(comp_name, level, copy_to_host=False)
This returns a list of numpy or cupy arrays containing the particle int array data on each tile for this process.
Unless copy_to_host is specified, the data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.
- Parameters:
comp_name (str) – The component of the array data that will be returned
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle array data
- Return type:
List of arrays
- get_particle_r(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘r’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle r position
- Return type:
List of arrays
- get_particle_real_arrays(comp_name, level, copy_to_host=False)
This returns a list of numpy or cupy arrays containing the particle real array data on each tile for this process.
Unless copy_to_host is specified, the data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.
- Parameters:
comp_name (str) – The component of the array data that will be returned
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle array data
- Return type:
List of arrays
- get_particle_theta(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle theta on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle theta position
- Return type:
List of arrays
- get_particle_ux(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle x momentum on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle x momentum
- Return type:
List of arrays
- get_particle_uy(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle y momentum on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle y momentum
- Return type:
List of arrays
- get_particle_uz(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle z momentum on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle z momentum
- Return type:
List of arrays
- get_particle_weight(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle weight on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle weight
- Return type:
List of arrays
- get_particle_x(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘x’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle x position
- Return type:
List of arrays
- get_particle_y(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘y’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle y position
- Return type:
List of arrays
- get_particle_z(level=0, copy_to_host=False)
Return a list of numpy or cupy arrays containing the particle ‘z’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle z position
- Return type:
List of arrays
- get_species_charge_sum(local=False)
Returns the total charge in the simulation due to the given species.
- Parameters:
local (bool) – If True return total charge per processor
- get_species_energy_sum(local=False)
Returns the total kinetic energy in the simulation due to the given species.
- Parameters:
local (bool) – If True return total energy per processor
- property idcpu
Return a list of numpy or cupy arrays containing the particle ‘idcpu’ numbers on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle idcpu
- Return type:
List of arrays
- property nps
Get the number of particles of this species in the simulation.
- Parameters:
local (bool) – If True the particle count on this processor will be returned. Default False.
- Returns:
An integer count of the number of particles
- Return type:
int
- property rp
Return a list of numpy or cupy arrays containing the particle ‘r’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle r position
- Return type:
List of arrays
- property thetap
Return a list of numpy or cupy arrays containing the particle theta on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle theta position
- Return type:
List of arrays
- property uxp
Return a list of numpy or cupy arrays containing the particle x momentum on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle x momentum
- Return type:
List of arrays
- property uyp
Return a list of numpy or cupy arrays containing the particle y momentum on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle y momentum
- Return type:
List of arrays
- property uzp
Return a list of numpy or cupy arrays containing the particle z momentum on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle z momentum
- Return type:
List of arrays
- property wp
Return a list of numpy or cupy arrays containing the particle weight on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle weight
- Return type:
List of arrays
- property xp
Return a list of numpy or cupy arrays containing the particle ‘x’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle x position
- Return type:
List of arrays
- property yp
Return a list of numpy or cupy arrays containing the particle ‘y’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle y position
- Return type:
List of arrays
- property zp
Return a list of numpy or cupy arrays containing the particle ‘z’ positions on each tile.
- Parameters:
level (int) – The refinement level to reference (default=0)
copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.
- Returns:
The requested particle z position
- Return type:
List of arrays
The get_particle_real_arrays(), get_particle_int_arrays() and
get_particle_idcpu_arrays() functions are called
by several utility functions of the form get_particle_{comp_name} where
comp_name is one of x, y, z, r, theta, id, cpu,
weight, ux, uy or uz.
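A short sketch combining several of these accessors, reusing the electron_wrapper defined above:
# number of electrons, total charge, and kinetic energy across all processes
n_electrons = electron_wrapper.get_particle_count()
q_total = electron_wrapper.get_species_charge_sum()
E_total = electron_wrapper.get_species_energy_sum()

# per-tile arrays of positions and momenta (views into WarpX memory unless
# copy_to_host=True is requested on GPU runs)
x_tiles = electron_wrapper.get_particle_x(level=0)
uz_tiles = electron_wrapper.uzp

# deposit this species' charge density into rho_fp on level 0
electron_wrapper.deposit_charge_density(level=0)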
Diagnostics
Various diagnostics are also accessible from Python.
This includes getting the deposited or total charge density from a given species as well as accessing the scraped particle buffer.
See the example in Examples/Tests/ParticleBoundaryScrape for a reference on how to interact with scraped particle data.
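For instance, a minimal sketch that queries the scraped-particle buffer for electrons lost at the upper z boundary (constructing the wrapper without arguments is an assumption based on the class documentation below; the boundary and component names follow the parameter descriptions below):
from pywarpx import particle_containers

buffer_wrapper = particle_containers.ParticleBoundaryBufferWrapper()

# number of electrons scraped so far at the upper z boundary
n_scraped = buffer_wrapper.get_particle_boundary_buffer_size("electrons", "z_hi")

# per-tile arrays of the scraped particles' weights on level 0
w_scraped = buffer_wrapper.get_particle_boundary_buffer("electrons", "z_hi", "w", 0)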
- class pywarpx.particle_containers.ParticleBoundaryBufferWrapper
Wrapper around particle boundary buffer containers. This provides a convenient way to query data in the particle boundary buffer containers.
- clear_buffer()
Clear the buffer that holds the particles lost at the boundaries.
- get_particle_boundary_buffer(species_name, boundary, comp_name, level)
This returns a list of numpy or cupy arrays containing the particle array data for a species that has been scraped by a specific simulation boundary.
The data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.
An example of a simple case of particle-boundary interaction (reflection) can be found at https://github.com/BLAST-WarpX/warpx/blob/319e55b10ad4f7c71b84a4fb21afbafe1f5b65c2/Examples/Tests/particle_boundary_interaction/PICMI_inputs_rz.py.
- Parameters:
species_name (str) – The species name that the data will be returned for.
boundary (str) – The boundary from which to get the scraped particle data in the form x/y/z_hi/lo or eb.
comp_name (str) – The component of the array data that will be returned: “x”, “y”, “z”, “ux”, “uy”, “uz”, “w”, “stepScraped”, “deltaTimeScraped”; if boundary=’eb’: “nx”, “ny”, “nz”
level (int) – Which AMR level to retrieve scraped particle data from.
- get_particle_boundary_buffer_size(species_name, boundary, local=False)
This returns the number of particles that have been scraped so far in the simulation from the specified boundary and of the specified species.
- Parameters:
species_name (str) – Return the number of scraped particles of this species
boundary (str) – The boundary from which to get the scraped particle data in the form x/y/z_hi/lo
local (bool) – Whether to only return the number of particles in the current processor’s buffer
- get_particle_scraped_this_step(species_name, boundary, comp_name, level)
This returns a list of numpy or cupy arrays containing the particle array data for particles that have been scraped at the current timestep, for a specific species and simulation boundary.
The data for the arrays is a view of the underlying boundary buffer in WarpX; writing to these arrays will therefore also modify the underlying boundary buffer.
- Parameters:
species_name (str) – The species name that the data will be returned for.
boundary (str) – The boundary from which to get the scraped particle data in the form x/y/z_hi/lo or eb.
comp_name (str) – The component of the array data that will be returned: “x”, “y”, “z”, “ux”, “uy”, “uz”, “w”, “stepScraped”, “deltaTimeScraped”; if boundary=’eb’: “nx”, “ny”, “nz”
level (int) – Which AMR level to retrieve scraped particle data from.
Modify Solvers
From Python, one can also replace numerical solvers in the PIC loop or add new physical processes into the time step loop. Examples:
Capacitive Discharge: replaces the Poisson solver of an electrostatic simulation (default: MLMG) with a Python function that uses SuperLU to directly solve the Poisson equation.
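A heavily simplified sketch of the pattern, using the poissonsolver callback location described above (the field name "phi_fp" and the helper my_direct_solve are assumptions for illustration only; see the Capacitive Discharge example for a complete, working implementation):
from pywarpx.callbacks import installcallback

def my_poisson_solver():
    # fetch charge density and potential on level 0
    # ("phi_fp" as a registered field name is an assumption here)
    rho = sim.fields.get("rho_fp", level=0)
    phi = sim.fields.get("phi_fp", level=0)

    # replace the default MLMG solve: compute phi from rho with a custom
    # solver and write the result back via global indexing
    # (my_direct_solve is a hypothetical user-supplied function)
    phi[...] = my_direct_solve(rho[...])

# called in place of the default computePhi call (electrostatic simulations only)
installcallback('poissonsolver', my_poisson_solver)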