MESA code notes


Relax method to start with an arbitrary initial abundance profile
new names for atomic mass
Overshooting
General
Structure of code
Transfer Failed Models
What we found so far
what to do to the work directory if there is a mesa update
dbg options
create_zahb_model
Opacities
sugimoto betas
parallel OpenMP
variables for the convection zone boundaries
grid refinement
initial composition
starting models
solar model, mixing length
Tester
logxxx.data output
network
mesh refinement control
Order of execution
making ZAMS models

Relax method to start with an arbitrary initial abundance profile

On Oct 19, 2011, at 7:45 PM, Falk Herwig wrote:

> Hi Bill, I would like to do some tests by imposing an arbitrary
> initial abundance profile onto the star, instead of assuming a
> homogeneous one. Have you done something like this with MESA?
> Basically, what I used to do in EVOL was to relax from a homogeneous
> model to a target distribution, taking into account energy generation
> but not the abundance changes due to burning. Can you give me a
> starting point?

Hi,

See if this still works -- it has been a long time since I tried it.  



     ! relax composition
        relax_initial_composition = .false.
        num_steps_to_relax_composition = 100 ! use this many steps to do conversion
        relax_composition_filename = '' ! file holding composition profile information
           ! file format for relax composition
           ! 1st line: num_species  num_points
           ! then 1 line for each point defining the desired composition
           ! xq xa(1) ... xa(num_species)
              ! xq = fraction of xmstar exterior to the point
              ! where xmstar = mstar - M_center
              ! xa(i) = mass fraction of i'th species
           ! NOTE: it is up to you to ensure that the current net isotopes match
           ! the species in the composition file.
           ! you can set show_net_species_info = .true. to check the isotopes in the net.
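
For example, if you were running a toy 3-species net, you would set
relax_initial_composition = .true. and point relax_composition_filename
at a file like this (hypothetical numbers, just to show the layout:
3 species, 4 points, xq running from 0 at the surface to 1 at the
center, mass fractions summing to 1 on each line):

     3 4
     0.00  0.70  0.28  0.02
     0.50  0.70  0.28  0.02
     0.75  0.35  0.63  0.02
     1.00  0.00  0.98  0.02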



Cheers,
Bill

new names for atomic mass


On Jan 4, 2012, at 2:40 PM, Falk Herwig wrote:

And how are these now called:

../public/se_support.f(145): error 6460: This is not a field name that is defined in the encompassing structure.   [A]
          iso_mass(i)        = chem_isos% A(myiso) 
--------------------------------------------^
../public/se_support.f(145): error 6158: The structure-name is invalid or is missing.   [CHEM_ISOS]
          iso_mass(i)        = chem_isos% A(myiso) 
---------------------------------^




Do you want atomic weight (non-integer) or baryon number (integer)?

"A" was being used for baryon number, but to me it seems "A" should be "atomic weight".
So I just decided to make everyone unhappy by changing things as follows (see chem_def)

double precision, dimension(:), pointer :: W ! atomic weight
integer, dimension(:), pointer :: Z ! number of protons
integer, dimension(:), pointer :: N ! number of neutrons
integer, dimension(:), pointer :: Z_plus_N ! number of baryons
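
So the failing line in se_support.f presumably just needs the new field
name; something like this, assuming the non-integer atomic weight is
what se_support wants:

      iso_mass(i) = chem_isos% W(myiso)    ! atomic weight (non-integer)
      ! or, if baryon number was intended:
      ! iso_mass(i) = dble(chem_isos% Z_plus_N(myiso))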

Overshooting


pax 1762 has the new controls Falk wants.   Aaron, you might try reintegrating with it.


        ! the switch from convective mixing to overshooting happens
        ! at a distance f0*Hp from the estimated location where grad_ad == grad_rad;
        ! where Hp is the pressure scale height at that location.
        ! a value < 0 tells the system to use the value of f for f0.
        overshoot_f0_above_nonburn = -1
        overshoot_f0_below_nonburn = -1
        overshoot_f0_above_burn_h = -1
        overshoot_f0_below_burn_h = -1
        overshoot_f0_above_burn_non_h = -1
        overshoot_f0_below_burn_non_h = -1






On Sep 28, 2009, at 8:15 AM, Falk Herwig wrote:


On Sep 28, 2009, at 7:40 AM, Bill Paxton wrote:

Overshooting in star now uses the new sub-cell scheme for r0, D0, Hp
(a code sketch follows the list):
   - find the subcell point where grada == gradr
   - calculate r and Hp at that location
   - move back into the convective zone by f*Hp
   - calculate r0 and D0 at that location
   - switch to overshooting starting at that location (so the outer f*Hp changes from convective to overshooting)
   - we may need to add another parameter in place of f to determine how far back to go
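
A sketch of what those steps amount to (schematic only, not the actual
MESA routine; the exponential decay is the usual Herwig-style
prescription, and D_conv_at is a hypothetical helper):

      ! cb = subcell location where grada == gradr
      r0 = r_cb - f*Hp_cb       ! step back into the convection zone by f*Hp
      D0 = D_conv_at(r0)        ! convective diffusion coefficient at r0
      ! for r > r0 the mixing switches to overshooting, with
      !    D(r) = D0 * exp( -2*(r - r0) / (f*Hp_cb) )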

General

input deck: inlist
restart: ./re x300, where 'x300' is a file name from the photos directory

Structure of code


In terms of understanding the implementation, you might want
to start with the files in mesa/star/public.  The 'star_def.f' file
defines the basic data structure for a star.  The file 'star_data.dek'
has the data about the model, while 'star_controls.dek' declares
the control parameters.  The default values and comments about
the controls are in 'star_defaults.dek'.  Finally, 'star_lib.f' has
the procedures that users of the modules should call.  The actual
code that does the work is called from here; sources are in
mesa/star/private.

The code in mesa/star/test/src/run_star.f is another place you
might want to start looking.  It is the top-level routine that
calls functions in star_lib to drive the evolution.
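
A cartoon of that flow (names are schematic, not the actual star_lib
API, which has more arguments and error handling):

      ! pseudocode sketch of a run_star-style driver
      id = alloc_star()                ! get a star handle
      call load_starting_model(id)     ! zams or pms model
      do while (.not. done)
         call do_one_evolve_step(id)   ! remesh, burn/mix, hydro, winds, report
         call write_logs_and_photos(id)
      end do
      call free_star(id)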

Transfer Failed Models

This is what to do if you have a model with a problem or some other special feature that you want to send to someone so they can reproduce your finding:

Open the inlist for the run with the failure.

Edit the 'save_model_number' control to save a model after the last
'photo' before the crash.  For example, if the run died at 4639 and
you're saving photos every 10th model, you should have one for 4630.
In that case, edit the inlist to have

  save_model_number = 4631
  save_model_filename = 'problem.mod'

Then restart

     ./re x630

Stop it after it saves 'problem.mod'.

Send the inlist and problem.mod.

What we found so far

  1. for high accuracy we need sugimoto beta = .5; for better stability we need 0 or 1
  2. the self-made PMS starting models are not good, for some reason; use the pre-manufactured models
  3. Bill added more time-step control to make sure the L_He is resolved during the flashes

what to do to the work directory if there is a mesa update


./clean
svn -r 971 update
./install
cd ..
cd M0002
./clean
./mk
./rn

One of these days I have to understand the mesa build system.

dbg options

For karo trace output at one (rho,T) point per call: in evolve.f,
this line turns the trace on:
  h% need_trace_for_karo = .true.

set the flag to .false. to turn off the tracing.

The trace is done for the surface point and looks like this:
                                log10rho   -0.8759418036E+01
                                  log10T    0.3478396909E+01
                               c_minus_o    0.1014171867E-03
                                    alfa    0.0000000000E+00
                            karo opacity    0.9525045170E-03

'alfa' is the fraction of the opacity that came from kap rather than
karo.
Also, set dbg to true in subroutine do_karo_get in karo/private/karo.f for full output at every call.

Mattsson wind trace: set dbg to true in eval_mattsson_wind in star/private/winds.f to get info per step.

create_zahb_model


Falk:
now, remind me what exactly
   create_zahb_model = .true.
does.

Bill:
It's just a shortcut to avoid the slow evolution up the RGB.

I've stored away a few models that have gone through the helium core
flash and reached stable He core burning (i.e. zahb models).  The
'create_zahb_model' process takes one of those and removes mass to
match the requested mass.  It seems to work okay down to about 0.55
Msun - beyond that it's cutting into the hydrogen burning region and
runs into trouble.  This is basically the method described in this
paper:

A. Serenelli and A. Weiss, "On constructing horizontal branch models", A&A 442, 1041-1048 (2005).

Opacities

On Jan 4, 2012, at 2:15 PM, Falk Herwig wrote:

> Has karo opacity support disappeared? I made a clean and also
> re-downloaded some dirs, so if karo disappeared only recently I may
> only notice this now:

I had planned to try using the mesa/star other_kap scheme to support
karo since if it doesn't then I should fix it.  Can you give me a day
or two to get that working?

thanks,
Bill

pre-2011:

The Lederer opacities are turned on by
      use_karo_for_kap = .true.

When that flag is set true, the calls that formerly went to the
mesa/kap module go to mesa/karo instead.  The smaller wrapper in
mesa/karo decides whether to pass the call on to the normal mesa/kap
routine or to use Lederer's instead (or a combination of the two in
transition regions).

The data for karo is NOT part of the mesa release.  So only the "in
group" can use karo.  You need to put the directory with his data in
the mesa/karo directory, and name it as he does: 'kR_gn93.Z.01'.

Again, I've tried this a bit, but it's up to you to break it.

To convince yourself that it is really using his opacities, you might
want to edit the routine do_karo_get in karo/private/karo.f.  That's
the place where it decides whether or not to use the standard
opacities.  It does 'call kap_get' for the standard ones, and 'call
interp_karo' for the ones from Lederer.

--Bill

sugimoto betas


      allow_one_sided_hydro = .false.
These affect how the difference equations are centered.  Instabilities, for example in the He-shell flash convection zone, can be "stabilised" by combining left- and right-centered pairs of equations, corresponding to values of 1 and 0 for the P/R and L/T equation pairs.  Doing this has noticeable effects on the accuracy of the integration: e.g. main-sequence tracks look different depending on what these values are set to.
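
A sketch of what the betas do (my reading of the scheme, not the
actual code): for a quantity q needed between grid points k and k+1,
a Sugimoto-style scheme takes

      q_mid = beta*q(k) + (1d0 - beta)*q(k+1)
      ! beta = 0.5d0 : centered, second-order accurate
      ! beta = 0 or 1: fully one-sided, first-order but more stable

which is why "What we found so far" wants beta = .5 for accuracy but 0 or 1 for stability.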

parallel OpenMP

export OMP_NUM_THREADS=8

variables for the convection zone boundaries

mx_cv_top and mx_cv_bot are the top and bottom of the largest convective zone; mx2_cv_top and mx2_cv_bot are the next largest.  Units are fraction of total mass (I think).  Convergence problems can be induced by failing to find a convective boundary, e.g. at the envelope bottom.

I've managed to get past the problem spot that killed 19456 by getting
it to give higher priority to good resolution near the bottom edge of
non-burn convective zones.
 
The additions to the inlist file are:      
     Fconv_below_nonburn = 0.4d0
     Fconv_inside_nonburn = 0.4d0
     Fconv_above_nonburn = 1d0

grid refinement


refine on an abundance profile (the example below is He4): the controls are declared in star_controls.dek and set in the inlist (example below)
            ! gradient of helium mass fraction
            double precision :: mesh_XHe4_min_for_delta      ! threshold
            double precision :: mesh_delta_XHe4_max ! <= 0 means skip this
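
In the inlist that would look something like this (illustrative values,
mirroring the XH1 example in the mesh refinement control section below):

      mesh_XHe4_min_for_delta = 1d-5 ! only apply where xhe4 exceeds this
      mesh_delta_XHe4_max = 0.05d0   ! request a new point if xhe4 jumps by more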

initial composition


any alternatives to Anders & Grevesse as initial composition?

The set_z and set_y things are not the only way to change composition;
there are also routines in the API.  The most general is
'star_relax_composition' where you provide a composition profile as a
function of mass.

Also: check in star_lib.f following the comment "here are some
routines for doing special adjustments to the star's composition".

starting models


the create_pms model thing works fine.

I've noticed some unexplained differences between the pre-built zams
models (that are derived from Eggleton's zams models) and the result
of evolving the pre-main-sequence starting model to the zams.  I
haven't spent any time trying to figure that one out however.

I guess you start with some solution to the Lane-Emden equations?

I use an n=3/2 polytrope to guess an initial center density to match the
requested initial center temperature.  I use a constant T Ds/Dt value
to generate luminosity and crank that up enough to make things
convective.  Then I do a shooting solution from the center using the
actual mesa/microphysics rather than the Lane-Emden solution.  That
gets iterated until the mass is right.
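
The polytropic guess, roughly (my reconstruction, assuming an ideal
gas): for an n=3/2 polytrope P = K*rho^(5/3), matching the central
ideal-gas pressure P_c = rho_c*kB*T_c/(mu*m_p) gives

      rho_c = ( kB*T_c / (mu*m_p*K) )**1.5d0
      ! with K fixed by the polytropic mass-radius relation for the requested mass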

solar model, mixing length

Unless I know how you have determined alpha, I cannot continue.  If you
have not calibrated this against the solar parameters, that's what we
have to do first.

In star/test you'll find "inlist_solar".  If you run that, you'll get
a solar-ish model after 69 steps (or so).  The alpha was tuned (to
2.18) to give correct R and L.  However, as you know, change just
about any of the other parameters, and you'll need to retune alpha.
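
i.e. something like this in the controls section of the inlist
(assuming the control name is unchanged):

      mixing_length_alpha = 2.18d0 ! tuned to give solar R and L with these inputs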

Tester

What is tester doing?
Each package in mesa has the same layout and the same installation steps.  Part of the installation is running a test to see if things look okay.  That test is done in the 'test' directory of the package; you'll find a test/make/makefile too.  The makefile for test creates a little test program called 'tester'.  So for each package, during installation a new 'tester' is created and run.  The output from tester is compared (using ndiff) to the expected output (in 'test_output').

logxxx.data output

conv_vel is the convection velocity

conv_coeff is the diffusion coefficient, in gm^2/sec

mixing_type is
	    0 none
	    1 convection
	    2 overshooting
	    3 semiconvection
	    4 salt finger

network


One other thing that I would definitely want to see in the log.data files is the C13 abundance. Could you add that to the output?

The log file for a particular model has all of the isotopes that are in the current net, so if you aren't seeing C13, you're not using a net that includes it.   

For example, you might like to use this one:

   change_net = .true. ! if true, then change the nuclear reaction network
      new_net_num = 65536 ! falk's net

It's based on the net you use in EVOL.  If you change to this, you should get c13 and the rest in the logs.

         isos(:) = (/
     >      ih1,
     >      ih2,
     >      ihe3,
     >      ihe4,
     >      ili7,
     >      ibe7,
     >      ib8,
     >      ic12,
     >      ic13,
     >      in14,
     >      in15,
     >      io16,
     >      io17,
     >      io18,
     >      if19,
     >      ine20,
     >      ine22,
     >      img24
     >      /)     
         reactions(:) = (/
     >
     >   ! pp chains
     >      irpp,              ! p(p,e+nu)h2                   nacre
     >      irdpg,             ! h2(p,g)he3                    nacre
     >      ir33,              ! he3(he3,2p)he4                nacre 
     >      ir34,              ! he4(he3,g)be7                 nacre
     >      irbe7ec,           ! be7(e-,nu)li7                 nacre
     >      irli7pa,           ! li7(p,a)he4                   nacre
     >      irbe7pg,           ! be7(p,g)b8                    nacre
     >      irb8ep,            ! b8(e+,nu)be8 => 2a
     >      irb8gp,            ! b8(g,p)be7                    nacre
     >      
     >   ! cno cycles (assuming instantaneous beta decays)
     >      irc12_to_c13,      ! c12(p,g)n13(e+nu)c13          nacre
     >      irc13pg,           ! c13(p,g)n14                   nacre
     >      irn14_to_n15,      ! n14(p,g)o15(e+nu)n15          nacre
     >      irn15pa,           ! n15(p,a)c12                   nacre
     >      irn15pg,           ! n15(p,g)o16                   nacre
     >      iro16_to_o17,      ! o16(p,g)f17(e+nu)o17          nacre
     >      iro17pa,           ! o17(p,a)n14                   nacre
     >      iro17_to_o18,      ! o17(p,g)f18(e+nu)o18          nacre
     >      iro18pa,           ! o18(p,a)n15                   nacre 
     >      iro18pg,           ! o18(p,g)f19                   cf88
     >      irf19pa,           ! f19(p,a)o16                   cf88
     >      
     >   ! alpha burning
     >      ir3a,              ! triple alpha                  nacre
     >      irc12ag,           ! c12(a,g)o16                   nacre
     >      irn14ag_to_o18,    ! n14(a,g)f18(e+nu)o18          
     >      iro16ag,           ! o16(a,g)ne20
     >      iro18ag,           ! o18(a,g)ne22                  cf88
     >      irne20ag,          ! ne20(a,g)mg24
     >      
     >   ! ne22 burning
     >      irne22pg_to_mg24   ! ne22(p,g)na23(p,g)mg24        nacre
     >      
     >      /)     


mesh refinement control


Falk:
      mesh_delta_XH1_max = 0.05
      mesh_XH1_min_for_delta = 1.d-5

Does this mean: include a mesh zone at i if XH(i) > 1e-5 and
XH(i)-XH(i-1) > 0.05, where 0.05 is linear in mass fraction (leaving
aside details of the mesh indices)?

Bill: Yup.  In particular, it doesn't depend on the mass coordinates
of the points.  It just looks at the magnitude of the change from one
point to the next and triggers insertion of a new point if the change
is too large.

--Bill
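
In code form, the rule Bill describes would be roughly (a sketch, not
the actual mesh code):

      if (XH(i) > mesh_XH1_min_for_delta) then
         if (abs(XH(i) - XH(i-1)) > mesh_delta_XH1_max) then
            ! flag a new point for insertion between i-1 and i
         end if
      end if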

Order of execution


Sequence of operations during a step of stellar evolution in mesa/star

1) if first_try then  ! don't do this for retries
      prepare_for_new_step
         set_mixing_info -- for use by remesh
         remesh -- note that remesh is not redone in case of a retry
         save whatever is needed in case of retry

2) first_half_step
      set_vars
      do_element_diffusion
      if burn_initial_frac > 0
         do_burn_and_mix
            set_mixing_info
            call burn_and_mix module

3) do_hydro_converge

4) second_half_step
      if burn_initial_frac < 1
         do_burn_and_mix
            set_mixing_info
            call burn_and_mix module
         if redo_hydro_after_burn_and_mix then
            do_hydro_reconverge

5) do_winds -- calculate mdot

6) if mdot /= 0 then
      adjust mass -- according to mdot
      if redo_hydro_after_adjust_mass then
         do_hydro_reconverge

7) do_report






current defaults:

      burn_initial_frac = 0 -- all burning done during second_half_step
      redo_hydro_after_burn_and_mix = .true.
      redo_hydro_after_adjust_mass = .true.
   

So you can see that the hydro can be done several times during a step.
This seems to be a good investment in the overall robustness of the code.


hydro needs dlnT/dlnP gradient from MLT for use in dlnT/dm equation.
The value depends on the local structure variables, so the hydro
routines will call MLT to reevaluate the gradient when the structure
variables change.  The hydro routines are not interested in mixing so
they don't care about the diffusion coeff or the convection
velocities.  It simply saves the results for these in vectors called
mlt_cdc and conv_vel.  (It also saves mlt_D, the Eulerian version of
mlt_cdc).

burn_and_mix is the opposite case: it needs cdc but doesn't care about
the dlnT/dlnP gradient.  It also cares about things like overshooting.
So, before calling burn_and_mix, the evolve routines construct a new
vector of diffusion coeffs, called cdc, starting from mlt_cdc but
modified for things like overshooting.  This is done by a routine
called 'set_mixing_info'.

set_mixing_info starts by copying mlt_cdc to cdc.  Recall that mlt_cdc
comes from the most recent hydro evaluation and cdc will be used by
the mixing.  The first thing set_mixing_info does is remove
singletons: any single-point radiative or convective region is
absorbed into its neighbors.  The second step is to add mixing to the
outer envelope, from the surface down to where T reaches T_mix_limit.
This prevents a tiny radiative region outside of the main envelope
from blocking envelope mixing from reaching the surface.  The final
step for set_mixing_info is to add overshooting.
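
Schematically (hypothetical helper names, just restating those three
steps):

      cdc(1:nz) = mlt_cdc(1:nz)     ! start from the latest hydro/MLT results
      call remove_singletons(cdc)   ! absorb single-point radiative/convective regions
      call mix_surface_down_to_T_limit(cdc) ! force surface mixing down to T_mix_limit
      call add_overshooting(cdc)    ! finally, apply the overshoot prescription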

making ZAMS models


Look in star/starting_models to see what initial Z's are
available.  You'll find zams_z1m4.data and zams_z1m8.data
where the '1m4' means 1e-4 of course.  Either of these will
do; let's use 1e-4 and reduce Z from there.

In your inlist, specify one of the existing zams sets.

         initial_z = 1d-4 ! initial metallicity

Then tell it to change the initial Z to what you want.

      change_initial_Z = .true.
      new_Z = 2d-6

Falk Herwig, Last update: Wed 4 Jan 2012 15:35:16 PST , WWW home