\documentstyle[longtable,a4,11pt]{report}

\setlength{\parskip}{2.5pt}

\newcommand{\expl}[2]{\noindent{\parbox[t]{5.0cm}{\tt #1}}
{\parbox[t]{8.2cm}{#2}}}
\newcommand{\struc}[2]{\noindent{\parbox[t]{3.0cm}{\sf #1}}
{\parbox[t]{9.2cm}{#2}}}

\begin{document}

\title{{\Huge\bf Open Cluster Data Base} \\ {\LARGE Version 2.0} \\
{\LARGE (User's Guide)}}
\author{J.-C. Mermilliod\\
\\
Institut d'Astronomie de l'Universit\'e de Lausanne\\
CH - 1290 Chavannes-des-Bois / Switzerland\\
\\
E-mail: {\tt mermio@scsun.unige.ch}}
\date{\today}
\maketitle

\newpage
\begin{abstract}
The database for stars in galactic open clusters (BDA) has been developed 
since 1987 at the Institute for Astronomy (University of Lausanne). The 
extensive collection of observational data covers most significant domains
and concerns about 100000 stars in some 500 NGC, IC and anonymous clusters. 
This includes measurements in most photometric systems in which cluster 
stars have been observed, spectroscopic observations, astrometric data, 
various kinds of useful information and extensive bibliography. Maps for 
about 180 clusters have been scanned and included in the database.
The greatest effort has been spent in solving the identification problems 
raised by the definition of so many different numbering systems.

The database not only aims at storing data but also at offering a versatile
working environment covering many aspects of the study of open clusters. It 
provides tools to compare the data and plot photometric diagrams. It is most 
suitable for use on a workstation. Independently of the application software 
proposed, the main utility of the database lies in the extensive data 
collection brought into uniform numbering systems. The BDA presents a clear 
report of the present status of observations.

The following document presents the general philosophy of the data 
organisation and retrieval. It explains with many examples how to use the 
database. 
\end{abstract}

\pagenumbering{roman}
\tableofcontents

\part{DATABASE DESCRIPTION}
\thispagestyle{empty}
\pagenumbering{arabic}

\chapter{Introduction}

\section{History}
About 1200 galactic open clusters are known and approximately half of 
them have been observed so far in at least one photometric system. The
number of stars per cluster ranges from a few tens for the poorest objects
to several thousand for the most prominent clusters.

Modern observations of open clusters developed very rapidly after the 
definition of the UBV photoelectric system (Johnson \& Morgan 1953). 
These observations produced a number of colour-magnitude (Hertzsprung-Russell) 
diagrams which made fundamental contributions to the understanding of stellar 
evolution. At the same time, photographic photometry made it possible to 
observe larger areas and reach fainter stars. 
Additional information, mostly spectroscopic, was gradually obtained, 
first for stars in the nearby clusters and later in more distant clusters, 
thanks to the existence of larger telescopes and more efficient detectors. 
Two-dimensional detectors are best adapted to the observation of star 
clusters, and CCD observing is today becoming the preferred technique, 
replacing both photoelectric and photographic photometry. 

Data compilation started as early as 1972 at the Institute for Astronomy
(University of Lausanne, Switzerland). Mermilliod (1976a) published a first 
catalogue of UBV photometry and MK spectral types in open clusters. The 
third version was announced ten years later (Mermilliod 1986a). The 
systematic determination of cross-references between the many numbering 
systems in a cluster was the basic work which made the realisation of the 
data collections possible. Several catalogues were distributed by the Strasbourg 
Data Center (CDS). The files remained on magnetic tapes until the installation 
of Unix workstations and large disks in our institute made it possible to keep 
the data on-line. These compilations were discontinued in their older form and 
the data were organised in a database designed in March 1987 (Mermilliod 
1988a, 1988b).

\section{Need for a database}
The quantity and variety of observations accumulated on about 100000 stars 
in open clusters are quite impressive and sufficient to motivate the 
development of a specialised database. Data analysis in open clusters 
is quite complex, and extensive information is often necessary but 
difficult to gather because of the multiple star designations created so 
far. The star identification problem, already discussed (Mermilliod 1972, 
1973, 1976b, 1979a), is not specific to star clusters, but takes on a more 
acute form there. The variety of star identifications imposes an unavoidable 
amount of extra work to collect the data available for any star. The necessity 
to keep on-line track of the cross-references between the various numbering 
systems may be one of the motivations for developing a database.

The database offers one solution to this problem, together with coherent
data storage and retrieval.
The fact that the data are kept in a stable format makes it easier to 
develop tools. The database is also a deposit for unpublished data, although 
the development of anonymous ftp servers has reduced its interest
as a means of preserving unpublished data.

\section{Access mode}
The database is maintained on a Sun Sparc workstation and its total size is 
about 35 MB for a total of about 6000 files. Due to the specific structure 
of the database, it is presently more convenient to have a copy of it on a 
workstation. The dissemination of various copies may raise the problem of the 
simultaneous existence of several revisions. However, this problem is solved 
by the transfer of the modified or new data files grouped in a tarfile over 
the Internet. Such a system is already used to maintain a copy of the database 
at the Meudon observatory in Paris on a DEC workstation, which offers 
additional possibilities for distribution to other machines. Access through 
the World Wide Web is being examined: a hypertext description is being 
written and some database consultation facilities will be provided. There 
is also a project to 
access the database through the ADS facility. 

\section{User's guide overview}
{\bf The first part} presents a database overview and describes the 
concept of datatype and the organisation of the bibliography (Chapter 2),
the database structure in directories and the file organisation (Chapter 3),
the different query modes that have been considered for BDA (Chapter 4) 
and the graphics facilities (Chapter 5).

Chapter 6 discusses the detail of the database installation from tape 
and the various definition files. 

{\bf The second part} is divided into four chapters which explain how to use 
the basic commands and the various query modes: command, menu, prompt
and graphical user interface.

{\bf The third part} presents applications provided by the database and
discusses the underlying philosophy. It also explains how to use them.
The use of the graphic program which generates the photometric diagrams
is explained here.

{\bf The Appendix} presents a summary of the command and program names, 
option meanings, data types, file names, field names and aliases. It also 
describes the catalogue of red giants in open clusters.

\chapter{Database Overview}
\section{Basic concepts}
\subsection{Database concept}
The database structure results from the history of data compilation, the
constraints of existing application software and mostly reflects the
way the research using the database is viewed.

The database has been built not only to query information for one or a few 
stars, but mainly to study open clusters in a general sense. This includes 
data comparisons and data elaboration, as well as the evaluation of cluster 
properties. The latter analysis relies basically on membership determination.

Therefore clusters, rather than stars, form the basic unit of the database,
and each cluster has its own directory containing the relevant data.
The various logical steps of the cluster study and the necessity of starting
from the original data justify the adopted structure.

\subsection{Data types}
One important concept in BDA is that of datatype. A datatype designation has 
been attributed to each kind of data. This designation is generally similar 
to acronyms currently in use, for example {\bf ubv, mk, uvby, jhk},
and so on. The datatype designations are described in Appendix C.

Knowledge of the datatype is important because it is always used when
querying the database. The datatype may be an argument to commands, as
in {\tt list ubv}, but more generally many command names are identical
to the datatype designation. For example, the command to get the UBV data 
for some stars, say stars \#1, 2 and 3, is simply: {\tt ubv 1 2 3}.

This syntax and the fact that the data are collected in distinct files
hide the name of the file that contains the desired data and the field names 
in each record.
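As a concrete illustration, a datatype command of this kind could be sketched as a small shell wrapper. The file name {\tt ubv.dat} and the record layout (star number in the first field) are assumptions made for the illustration, not the actual BDA implementation:

```shell
#!/bin/sh
# Hypothetical sketch of a datatype command such as "ubv": it hides the
# name of the data file and prints the records whose first field (the
# star number) matches one of the arguments.
# Assumes the current directory is a cluster directory containing an
# uncompressed data file named after the datatype (here ubv.dat).

datafile=ubv.dat
for star in "$@"; do
    # records are whitespace-separated; field 1 is the star number
    awk -v id="$star" '$1 == id' "$datafile"
done
```

Invoked as {\tt ubv 1 2 3}, such a wrapper would print the UBV records of stars \#1, \#2 and \#3 without the user ever knowing the underlying file name.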

\subsection{The basic scheme}
The analysis of photometric diagrams, and especially the colour-magnitude 
diagrams that can be built in several photometric systems, remains the key 
method to determine the cluster parameters (reddening, distance and age).
The study of a cluster also requires easy access to all
complementary information, remarks and bibliography. This is why everything 
is organised in the cluster directories.

Although the production of a colour-magnitude diagram may seem rather easy, 
the real situation is unfortunately not simple, and the data are not always 
of sufficient quality. A large amount of work is therefore necessary to plot 
a reliable colour-magnitude diagram. The various steps involved are:

\begin{enumerate}
\item the census of the stars in the cluster field, to determine the completeness in terms of both limiting magnitude and surface coverage; 
\item the comparison of the various sources of data; 
\item the selection of cluster members. This is certainly the main problem
and no straightforward solution has yet been found.
The other data contained in the database, like spectral types, radial
velocities, remarks, and so on, may help in assigning cluster membership;
\item with the best selected data, it is eventually possible to plot a nice 
colour-magnitude diagram and determine more accurate cluster distances and ages.
\end{enumerate}

The database cross-reference tables and the collection of rectangular 
positions are useful to achieve the first step. Tools to perform the 
second step and compute mean values are also provided. 

\subsection{Star numbering}
Star numbering is probably the most serious problem to solve to build a
coherent database. The solution adopted is based on earlier work which 
was undertaken to include photometric data on star clusters into 
catalogues prepared for the Strasbourg Data Center (Mermilliod 1972, 1973, 
1976b, 1979a).

One numbering system has been adopted for each cluster; it provides
a unique identifier for each star, which is used to register the various 
data. Therefore the data recorded under number \#1 in each data file 
always belong to the same star, i.e. star \#1. This identifier 
is the main key used to access the data. This policy 
implies that the original star numbers found in publications often have 
to be transformed into the system adopted in the database. Complete
cross-references have been determined, either by comparing cluster charts
or by using rectangular (x,y) positions, and carefully kept in tables, one 
per cluster. These tables give, for each star, the different identification 
numbers existing in the various numbering systems. The first
column of the cross-reference table defines the adopted numbering system. 
The second column contains the numbers from the basic source, 
and the numbers adopted in BDA are therefore identical to it. But no numbering 
system contains all the stars present in the cluster field, and the basic 
numbering systems have therefore been extended.

Two solutions have been adopted to record additional stars. The first one
consists in continuing the numbering system with the new stars of the
second reference, entered in increasing number order, then with the third 
system, and so on. This works fine if the additional stars are not too numerous.
The second solution is used when one numbering system contains many new
stars, but it is not desirable to change the reference system. This 
happens, for example, when proper motion studies contain many stars 
outside the main cluster area. In these cases, a constant is added to 
the star numbers. Adding 1000 makes it easy to read back the 
original numbers.
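The second, offset-based scheme can be applied mechanically. The following sketch renumbers the stars of a new source by adding 1000 to the first field; the file name {\tt newsource.dat} and its layout (star number first) are assumptions for the illustration:

```shell
# Hypothetical sketch: renumber the additional stars of a new source by
# adding a constant offset (here 1000) to the first field, so that the
# original numbers remain recoverable by simple subtraction.

awk '{ $1 = $1 + 1000; print }' newsource.dat > newsource.bda
```

Star \#1 of the new source thus becomes \#1001 in the database, and the original designation is read back by dropping the leading digit.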

\subsection{Star membership}
The membership of stars in open clusters is certainly the most difficult
problem to solve as a preliminary to any study. However, there are very
few methods to use when proper motion probabilities are not available.
Existing codes developed at several observatories are not yet publicly 
available.

An expert system, adapted from Jonathan (Frot 1988), has been 
implemented in the database with a number of rules that should help the 
user to determine if a given star is a cluster member or not. Rules have to 
be improved to resolve the ambiguities resulting from the curvature of the 
sequences in some photometric diagrams and to extend the number of data taken 
into consideration. Presently, the expert system looks in the database for 
the UBV data, spectral types, proper motion membership probability and radial
velocity, as well as the distance from the cluster centre, and makes an 
inference on the membership. It extracts the data itself and proposes a 
decision according to the data it has found and the rules it knows to analyse 
the situation. This behaviour reproduces the way the same question is solved 
by a human user. It could of course use more information than it does now, 
speak English instead of French, the language in which it has been developed, 
and above all, have better rules. It would however be extremely useful to be 
able to sort out the member stars more or less automatically, on the basis 
of objective criteria.

\section{Directory organisation}
The database is not a traditional relational database management system, 
but rather an advanced file management system. Clusters, but not stars, 
form the basic unit of the database and the structure has been designed 
to provide a natural working environment. The whole data set for a cluster 
forms a special relational set, because one key, the star designation, 
is common to all files.

The database structure uses the directory hierarchy supported by the Unix 
system. The main directory is the database itself. It contains several 
sub-directories: description of the database, help information, references, 
bibliography, programs, shell scripts. The clusters are collected in parent 
directories according to the source catalogues (NGC, IC or anon). Each cluster 
defines an independent directory identified by its name and containing the 
available data in distinct files, one for each data type. This structure 
allows easy inclusion of any new data type.

The present database structure is thought of as a first step in the organisation
of cluster data and bibliography. When enough data analysis has been performed 
and only one set of data of each type is available for each star, it may
be more convenient to adopt another structure and collect all the data 
in one file for each cluster.

\section{File organisation}
The files are organised sequentially and compressed and, within the files, 
the entries are sorted by star number and source reference. Due to the 
small size of many files, there is no need for indexing or direct access. 

The filenames reflect their content and are conserved throughout the
database. They are presented in Appendix D. The command names are usually 
identical to the designation of the datatype they handle.

This file organisation offers several advantages: 
it allows the full use of UNIX file handling tools for 
searching and sorting and has enabled the straightforward 
implementation of graphics facilities. Compressing the files with the 
system function {\tt compress} produces a reduction of disk storage by a 
factor of two. Many tasks can be performed without physically 
decompressing the files, by using the system function {\tt zcat} to feed 
a pipe; it produces an output but leaves the files compressed. Finally, 
Unix editors are used to maintain the files, saving software development.
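The pipe technique described above can be sketched as follows. The original database uses the {\tt compress} utility ({\tt .Z} files); {\tt gzip} is used here as a stand-in, since {\tt compress} is no longer installed everywhere, but the {\tt zcat}-fed pipe works identically. The file name {\tt ubv.dat} is an assumption for the illustration:

```shell
# Sketch of working on a compressed data file without physically
# decompressing it on disk: zcat writes the clear text to a pipe
# while the file itself stays compressed.

gzip ubv.dat                          # leaves ubv.dat.gz on disk
zcat ubv.dat.gz | sort -n -k1 | head  # sort records by star number
```

After the pipeline has run, only {\tt ubv.dat.gz} remains on disk; no temporary clear-text copy is created.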

\section{Record structure}
Whenever possible, the records have the same structure: each record 
contains the star identification, the source and the data. 
White space is used as a separator and no padding is used for 
missing data. The fields of each datatype are listed in Appendix E.
The star identification is the main key to access the data, but it is also 
possible to use filters based on the data sources or astrophysical
parameters. 

Most data files are 1 to n relations because several sources of data
may exist for each star. Files with mean or selected data are 1 to 1
relations. The first aim of the database and of the data analysis facilities
developed is to pass from the 1 to n state to the 1 to 1 state. This means 
that at the end of the analysis process, only one value of each parameter
will remain for each star.
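Passing from the 1 to n state to the 1 to 1 state amounts, in the simplest case, to averaging the measurements of each star. A minimal sketch, assuming records of the form ``star source value'' (the field positions and file names are assumptions for the illustration, not the actual BDA record layout):

```shell
# Hypothetical sketch of reducing a 1-to-n file to a 1-to-1 file:
# accumulate the third field (the measurement) per star number and
# print one mean value per star, sorted by star number.

awk '{ sum[$1] += $3; n[$1]++ }
     END { for (s in sum) printf "%s %.3f\n", s, sum[s]/n[s] }' \
    ubv.dat | sort -n -k1 > ubv.mean
```

Each star then appears exactly once in {\tt ubv.mean}, which is the 1 to 1 state described above.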

\section{Data sources}
The database started with the installation of the data already collected
and kept on magnetic tapes. Several catalogues had been announced and
made available through the Strasbourg Data Center: UBV photometry and MK 
spectral types in open clusters (Mermilliod 1976a, 1986a), UBV 
photographic data (Mermilliod 1984a), individual radial velocities 
(Mermilliod 1979b, 1984b), cross-reference tables (Mermilliod 1979a), 
cross-identifications with astronomical catalogues (Mermilliod 1986b). 
Unpublished compilations made by me and containing CCD data, rotational 
velocities, membership probabilities and positions were used at the first 
installation of the database.

Additional photoelectric photometric data were taken from the compilations 
made in our institute, and among them data in the following systems:
uvby (Hauck \& Mermilliod 1985), DDO (Mermilliod \& Nitschelm 1989), 
Walraven (Nitschelm \& Mermilliod 1990).

The files remained on magnetic tapes until the installation of Unix 
workstations and large disks in our institute made it possible to keep the 
data on-line. The data were organised in a database designed in March 1987 
(Mermilliod 1988a, 1988b) and the compilations of cluster 
data were discontinued in their older form. New published data are entered 
regularly. A progress report on the introduction of recent data has been
published in the CDS Bulletin (Mermilliod 1992a).


The bibliography from Alter et al. (1970) and the information from 
Lyng{\aa}'s (1987) catalogue were taken from the files distributed by 
the Strasbourg Data Center.

\section{Data content}
The database tries to collect all published data for stars in open clusters 
that may be useful either to determine the star membership, or to study 
the stellar content and properties of the cluster. The data are usually 
recorded in their original form, with an indication of the source, but also 
as averaged values or selected data when relevant. The mean values for UBV 
(photoelectric, photographic or CCD) are not kept in the database, but can
readily be computed. It presently contains:

\begin{itemize} 
\item {\bf astrometric} data: coordinates, rectangular positions, and some proper motions;
\item {\bf photometric} data in most systems in which cluster stars have been
observed; 
\item {\bf spectroscopic} data: spectral classification, radial- and rotational 
velocities; 
\item {\bf bibliographic} information;
\item {\bf specific} information:
(a) membership probabilities, 
(b) orbital elements of spectroscopic binaries, 
(c) remarks on peculiarity, variability, duplicity, 
(d) identifications of double star components, 
(e) cross-identifications with astronomical catalogues,
(f) list of red giants in the cluster field and of non-member stars.
\end{itemize}


Table 2.1 gives an insight into the present content of the database; it 
lists for each kind of data the number of clusters involved, the number 
of measurements, and the number of stars concerned, except for the 
references and bibliography where the number of entries is indicated. 
The completeness of the database should be rather high for many types 
of data, and is still improving.

\setlongtables

\begin{longtable}[p]{lrrr}
\caption{July 1994 database content} \\
\hline
  & \multicolumn{3}{c}{Number of} \\
  Subjects   & Clusters & Meas. & Stars \\
\hline
\endfirsthead
\hline
    & \multicolumn{3}{c}{Number of} \\
 Subjects  & Clusters & Meas. & Stars \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
 Identifications    &  365  &          &    9876  \\
 Transit Table      &  190  &          &   68719  \\
 Coordinates        &  441  &   30621  &   27879  \\
 Positions          &  441  &          &   39424  \\
 Positions (x,y)    &  385  &          &  122819  \\
 Double stars       &  193  &    1662  &    1230  \\
                    &       &          &          \\
 UBV photoelectric  &  413  &   30341  &   20632  \\
 UBV photographic   &  268  &   91458  &   71899  \\
 UBV CCD            &   69  &   30409  &   28322  \\
 UBV camera         &    3  &     194  &     194  \\
 UBV sit            &    3  &     284  &     284  \\
 UBV cmd            &    5  &          &    2867  \\
 UBV hrd            &    7  &          &     321  \\
 RGU (pg)           &   74  &   10191  &   10191  \\
 Geneva 7-colors    &  184  &          &    4306  \\
 uvby measures      &  155  &    5461  &    3897  \\
 uvby mean          &  155  &          &    3423  \\
 uvby Eggen         &   42  &     876  &     775  \\
 uvby CCD:          &    8  &     935  &     935  \\
 H$\beta$ measures  &  216  &    5696  &    3821  \\
 H$\beta$ mean      &  216  &          &    3312  \\
 Walraven           &   58  &    1432  &    1382  \\
 Vilnius            &   31  &     747  &     698  \\
 DDO                &  126  &     950  &     769  \\
 Washington         &   60  &     533  &     507  \\
 RI (Johnson)       &    2  &     405  &     350  \\
 RI (Kron)          &    4  &    1254  &    1111  \\
 RI (Eggen)         &   14  &     118  &     118  \\
 RI (Cousins)       &   23  &     869  &     847  \\
 RI (Cousins) CCD   &    8  &    3149  &    3112  \\
 JHK                &   33  &     776  &     774  \\
 uvgr (Thuan, Gunn) &    4  &     144  &     144  \\
 Smith              &    5  &     103  &     103  \\
                    &       &          &          \\
 MK types           &  272  &    7754  &    4645  \\
 MK types (selected) &  162  &          &    3974  \\
 HD types           &  299  &          &    8584  \\
 Vsini              &   79  &    2222  &    1592  \\
 RV mean            &   63  &    1799  &    1612  \\
 RV individual      &  183  &   29712  &    3450  \\
 RV GPO             &   10  &     568  &     568  \\
 RV RFS             &    7  &          &     141  \\
 Orbits             &   35  &     199  &     175  \\
                    &       &          &          \\
 Proper motion (abs) &    1  &     2331 &    1164  \\
 Proper motion (rel) &    4  &     3811 &    3811  \\
                     &       &          &          \\
 Probability ($\mu$) &   59  &    21574 &   21574  \\
 Probability (RV)   &    2  &          &      87  \\
 Remarks            &  234  &     3703 &    3146  \\
 Periods            &    3  &       67 &      62  \\
 gK stars           &  231  &          &    3291  \\
 Am stars           &   36  &          &     109  \\
 NM                 &   94  &          &    2645  \\
                    &       &          &          \\
 Cluster maps       &  177  &          &          \\
 Bibliography (Alter)  & 1200  &          &        \\
 Bibliography (69-94)  &     &          &    3000  \\
 References         &       &          &     2500  \\
\hline
\end{longtable}

Apart from the published data found in the literature, the database also 
contains information that was accumulated in the past or prepared especially 
for it to enhance its capabilities:

\begin{itemize}
\item Cross-reference tables contain the basic cross-identifications between
the various numbering systems in a cluster. A large part of the work has
been done by the author. 
\item Rectangular (x, y) positions (usually in arbitrary units) have been 
collected from the literature or measured on published photographs with a 
digitizing tablet. A file collects the information relevant to the 
scale of the (x, y) positions and the sources of the data.
\item Published photographs defining cluster numbering systems have been 
scanned and included in the database. About 180 maps have already been 
processed.
\end{itemize}


\section{Access to the information}
Thanks to the work on stellar identifications invested in the 
preparation of the catalogues used for building the database, the 
identification problem is greatly simplified; the various data for 
each star are collected under a unique designation, according to the 
numbering system adopted for the cluster. This identification is the main 
key for accessing the data of any star in any file.
However, there are other possibilities for querying the database:

\begin{itemize}
\item One can recover the star number in the adopted numbering system 
      (main key) starting with any identification existing in the 
      cross-reference tables;
\item One can access the data directly with HD, DM or other 
      identifications;
\item Data can be selected according to a source number;
\item Samples can be formed according to astrophysical criteria.
\end{itemize}
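The first of these query modes, recovering the adopted number from any known identification, reduces to a lookup in the cluster's cross-reference table. A sketch, where the table layout (adopted number in column 1, one column per numbering system) follows the description given earlier, but the file name {\tt cross.dat} and the column searched are assumptions for the illustration:

```shell
# Hypothetical sketch: recover the adopted star number (column 1 of the
# cross-reference table) from an identification in another numbering
# system, here assumed to be stored in column 3.

awk -v id=57 '$3 == id { print $1 }' cross.dat
```

The number printed is the main key, which can then be passed to any datatype command.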

Interesting clusters may be selected according to the number of 
available data of any type and the implementation of Lyng{\aa}'s (1987) 
catalogue allows other types of selection, based on galactic position 
or cluster age for example.

\section{Bibliography and references}

The access to general or bibliographic information is considered as a very 
important facility that the database should offer. The catalogues already
available in computer-readable form have been installed in the database and
an additional bibliographic service covering the recent literature has been
developed in another style to make the retrieval easier and more efficient. 
The various facets are provided by: 

\begin{itemize}
\item the compilation of cluster parameters by Lyng{\aa} (1987), which 
provides the global information available on open clusters; 
\item the bibliography compiled by Alter et al. (1970) and its first 
supplement (Ruprecht et al. 1981), which collect the references from 
about 1900 or before up to 1973; 
\item the recent bibliography, covering the years 1969 to the present day, 
which has been developed for the database. This bibliography is based on 
chapter 153 of the Astronomy and Astrophysics Abstracts and the recent 
references are regularly entered in the computer. The bibliography search 
is based on keywords. The most obvious one is simply the cluster name, but 
many others can be used. Abstracts are not yet included in the database.
\item the information on ongoing work, generally extracted from observatory
reports and AAS abstracts, is also available from the database.
\end{itemize}

The sources of most data are indicated, and the references can 
be obtained either one by one for any type of data or for a whole sample.


\section{On-line help and information}
On-line help and information are available at several levels. 
General descriptions of the database and applications are proposed 
through various menus. One can easily recover the names of the 
commands, the content of the files or the descriptions of the options. 
In addition, each command and function is (or will be) described in a 
manual file. Quick help on the command syntax needed to obtain the 
desired results is available through an on-line menu, which also offers many 
examples. A similar command description is available in hypertext form 
and can be displayed with NCSA Mosaic. 

\section{Facilities offered by the database}

The database offers several working facilities and some are described below.
Three examples are discussed by Mermilliod (1992b).

\begin{itemize}

\item {\bf Data comparison:}
Data comparison is an important step before starting any study. Tools have 
been developed to compare data coming from different sources. This concerns
essentially the UBV system, because UBV data represent a large fraction 
of the published photometric data. The other photometric systems seldom 
present two or more different sources of data.

\item {\bf Colour-magnitude diagram manipulation:}
The main tool of the database makes it possible to plot the various diagrams 
that can be built in the UBV and Geneva photometric systems. It offers a large spectrum 
of facilities, like fitting sequences or computing isochrones. Extensions 
have to be developed for other photometric systems (uvby, Walraven and others). 

\item {\bf Technical data:}
The technical information on instruments and data acquisition systems may 
also prove important to characterise the data contained in the database.
Pioneering work has been done by van Leeuwen (1985) who collected the 
information on telescopes and plates related to proper motion studies
in open clusters. This information is available in the database. The 
collection of similar information on telescopes and spectrographs used for 
radial velocity determination has been started. It would also be necessary 
to collect the same information for CCD photometry.

\item {\bf Miscellaneous facilities:}
To avoid remembering many commands and their options, a number of menus
have been written which group most actions concerning a given subject.
Those built up so far are generally related to the maintenance and development 
of the database, and were especially designed to facilitate the introduction 
of new data. In this category, one finds:

\begin{enumerate}
\item the management of {\it coordinates:} it was made to determine 
coordinates of stars in open clusters starting from (x,y) positions in 
arbitrary units and include them in the database;
\item the management of {\it rectangular positions:} (x,y) positions are 
now often published with CCD data. In addition many published charts have 
been measured and the results included in the database;
\item the management of {\it cross-references:} to maintain the principles 
of the database, it is necessary to determine the cross-references between 
any new numbering system and those already existing. 
\end{enumerate} 
\end{itemize}

\section{Graphics}
The SM (2.3.1) graphics package written by Robert 
Lupton and Patricia Monger is extensively used to produce graphical 
output and hardcopy. It is often used in its interpreter mode for two 
reasons: firstly, macros take less disk space than compiled Fortran or C
programs and are much easier to write, and secondly, the interaction with 
the user is extremely efficient; values of various parameters (plot size, 
title, symbols) can be changed interactively and the final plot printed.
The dialogs are driven by menus.

\section{Correction and updating}
A program proposing pull-down menus drives the updating 
processes through a number of shell scripts. Correction and updating 
are done either by editing the files, when only a few data are to be 
added or corrected, or by file-handling shell commands when more data 
are to be added.

\section{User's profile}
Even if the proposed software is not used, the utility of the database lies 
in the extensive collections of data brought into a uniform numbering system. 
The database can therefore be useful not only to astronomers working in the 
field of open star clusters, but also to students for a variety of work, 
because of the easy access to the data and the ease of implementing users' programs.

Copies of the database have been sent to several colleagues in various countries. Other colleagues and several students working on their Ph.D. have also 
asked for specific data or bibliographic information by E-mail. 

\section{Scientific use}
The realisation of a new atlas of colour-magnitude diagrams based on improved 
and homogeneous cluster parameters was the first scientific use foreseen for 
the database and the motivation for its development. The road to this goal 
is however quite long, but it remains a major project. Once reliable 
parameters have been determined for a number of clusters, it becomes possible 
to do some astrophysical research. 

The information collected in the database may be used to study 
the clusters' stellar content and properties. In spite of the 
limitations due to the lower precision of some data, or their limited 
number, the database is the best starting point for many astrophysical 
studies involving open clusters. Nowhere are complete data collections to
be found and one merit of the database is to give a clear report of the 
present observation status.

The database facilities have been used by Meynet et al. (1993) to 
compare new theoretical isochrones with the colour-magnitude diagrams of 
30 clusters. In the spring of 1994, the database was successfully used to 
cross-identify the stars detected by the Tycho experiment on board the 
Hipparcos satellite.

\section{Future plans}
The challenge for the future lies in the management of the rapidly growing 
number of new data coming from CCD photometry and extensive observations of 
faint stars in nearby clusters, covering a wide range of ground-based and 
space techniques. 

Due to the increasing quantity of available data, any study will become more 
and more time-consuming. One therefore has to think of more automated methods 
that rely not only on extensive data collections, but also on the transfer of 
knowledge to the computer, to let it do much of the work. Therefore, aside 
from continuing to collect and install new data and to take other data types 
into consideration, the main development should consist of implementing
more analysis facilities and calibrations. 

Another important point would be to include in the database the knowledge 
contained not only in the data or procedures themselves, but also in articles 
and review texts. Some of it could of course be recovered by searching the 
bibliography, but this may take a long time, because the number of papers 
written every year on open clusters is increasing steadily and reaches some 
200 papers in the best years. Therefore new results or exciting 
hypotheses should be extracted from the papers and stored in the database. 
This is now possible thanks to the hypertext facility offered by NCSA 
Mosaic or the WAIS text indexing and search facilities.


\chapter{Database structure}
The numerous files are located in a directory hierarchy based on a two- or 
three-level structure. The root directory is {\sf bda}, the database itself, 
and the data, programs, bibliography and documentation are contained in 
subdirectories. The following describes the database organisation and the 
content of the numerous directories.

\subsection{Clusters}
Each cluster forms a directory identified by its name (like: 2287, 2516 for 
NGC clusters, 2391, 2602 for IC clusters or cr228, tr16 for anonymous clusters).
The clusters are members of parent directories named {\sf ngc, ic}, and
{\sf anon} depending on the catalogue source. Anonymous clusters could have 
been given a two-level structure (author's name / cluster number), but this 
would have destroyed the general symmetry and been a source of difficulty.

\noindent Therefore the paths to the clusters NGC 2287, IC 2391 and the 
anonymous cluster Cr 228 are respectively:

\begin{itemize}
\item \struc{bda/ngc/2287}{~}
\item \struc{bda/ic/2391}{~}
\item \struc{bda/anon/cr228}{~}
\end{itemize}

\noindent Each cluster directory contains a working subdirectory, {\sf .T/}, 
which is used to copy data or write temporary files.

\subsection{Programs}
Different programming languages are used to build the database software.
The programs and shell scripts are grouped in directories according to
their characteristics. The paths to the executable programs and shell 
scripts contained in these directories are set by the command 
{\tt source .bdarc}, which should normally be included in the alias {\tt bda}.

\begin{itemize}
\item \struc{bda/bin}{groups shell commands, options and functions;}
\item \struc{bda/progf}{contains the Fortran sources and executables, 
corresponding mostly to application software;}
\item \struc{bda/progc}{contains the codes written in the C language,
mostly system programs using Unix libraries;}
\item \struc{bda/progs}{groups the graphic programs that use the SM C library;}
\item \struc{bda/progx}{groups the elements related to the development of the
graphical interface;}
\item \struc{bda/graphic}{contains the many routines of the program developed to
plot photometric diagrams. This is a temporary implementation to
facilitate the development;}
\item \struc{bda/smacro}{contains the SM default file (user's macros) and its 
two variants (one for each language) {\it default\_fr} and {\it default\_eng};}
\item \struc{bda/src}{has a small number of sources of external routines;}
\item \struc{bda/lib}{will contain libraries of BDA routines.}
\end{itemize}

\subsection{Description and documentation}
The documentation of the database and the on-line help of the various query
modes are located in several directories. The distribution is not
really exclusive and will be improved when the query programs are
more definite.

\begin{itemize}
\item \struc{bda/introduction}{contains the user's guide and installation 
pages;}
\item \struc{bda/descri}{contains documentation files used by the {\tt dsc} 
and {\tt doc} menus;}
\item \struc{bda/html}{contains the hypertext version of the user's guide 
and the command description for use with NCSA Mosaic;}
\item \struc{bda/emploi}{contains the command description for line mode;}
\item \struc{bda/man}{contains man pages for BDA commands and menus which can
be displayed with the Unix command {\tt man}. This directory is subdivided 
in two subdirectories: {\sf man1} and {\sf man3};}
\item \struc{bda/manuel}{contains the French manual files corresponding to 
the command {\tt mnl};}
\item \struc{bda/help}{contains help information for various programs;}
\item \struc{bda/xhelp}{contains the text of the on-line help provided
with the graphical user interface;}
\item \struc{bda/dictionnaire}{contains files that describe the record
structure of the data files;}
\item \struc{bda/information}{contains files with technical information on
the data;}
\item \struc{bda/contenu}{contains the files collecting the database content
summary for each datatype.}
\end{itemize}

\subsection{Bibliography and references}
The files containing the bibliographic references to the literature 
published from 1969 to the present day are located in the directory 
{\sf bda/bibliographie}. There is one bibliography file and one keyword
file for each year.

The files containing the references of the data
sources are located in the directory {\sf bda/references}. There is
one reference file for each datatype.

The files with the Alter et al. bibliography (1900 to 1973) and those
from the catalogue of Lyng{\aa} (1987) are normally distributed in the
cluster directories. The files for the clusters which do not yet have their
own directory are collected in the directories
{\sf bda/catalogue/budapest} and {\sf bda/catalogue/lynga} respectively.

Finally, the original UBV photographic data from Moffat and Vogt (1972)
are kept in {\sf bda/catalogue/bochum}.

\subsection{Miscellaneous}
There are more directories depending on {\sf bda}.

\begin{itemize}
\item \struc{bda/hipad}{contains the (x,y) positions measured on a digitizing 
tablet and awaiting their installation in the database;}
\item \struc{bda/newdata}{new data generally received by E-mail and awaiting
their inclusion in the database;}
\item \struc{bda/resultat}{results from data comparisons, and other files;}
\item \struc{bda/format}{templates for datafile format;}
\item \struc{bda/entete}{column headers for each datatype;}
\item \struc{bda/sequence}{reference sequences (ZAMS) for photometric diagrams,
and evolutionary models for computing isochrones;}
\item \struc{bda/isochrone}{isochrones computed on the fly. This directory
should be cleaned regularly;}
\item \struc{bda/modeles}{data on stellar inertia used to compute binary
system evolution;}
\item \struc{bda/regles}{rules for the expert systems;}
\item \struc{bda/tmp}{temporary files used by commands and programs;}
\item \struc{bda/gestion}{miscellaneous files related to the distribution
of BDA, especially the file named {\it journal}, which collects the
names of the files modified or added and the date of each change;}
\item \struc{bda/sauve}{backup copies of data or program files made before an
important change;}
\item \struc{bda/oldata}{cluster data, saved temporarily before doing a
major change, like a numbering system change;}
\item \struc{bda/menus}{French and English texts of the various menus;}
\item \struc{bda/poste}{a place to put files to be sent by E-mail;}
\item \struc{bda/tarfile}{the last tarfiles to update distributed copies of 
the database;}
\item \struc{bda/labo}{a working directory.}
\end{itemize}

\chapter{Query modes}

\section{Introduction}
It is not so easy to design a query method that remains simple and avoids 
an extended idiom and complex syntax. The idea was to avoid
something like SQL, because it should not be necessary to know the file
and field names to use the database. The query modes have therefore been 
designed to hide the database details as much as possible.

In BDA four different approaches have been tried, in an effort to minimize 
the learning effort and keep typing at a low level. These four modes can
be schematized as:

\begin{enumerate}
\item The command mode
\item The menu mode
\item The prompt mode
\item The graphical interface mode
\end{enumerate}

\section{The command mode}
\subsection{General principles}
The command mode is, historically, the first query mode developed. Commands 
are issued at the Unix shell prompt and the user benefits from the shell
facilities, including command history, command editing, and pipes or
output redirection. Unix-style commands with several options have 
been written. These options provide most methods needed to extract 
information from the database, compare data and plot diagrams. For the sake 
of simplicity, the commands that work on a specific datatype have the 
same name as the datatype designation. For example the command {\tt ubv} 
deals with the UBV pe data and the {\tt mk} command, with the MK spectral 
types. Command names are listed in Appendix A and option meanings, in 
Appendix B. The file names and content are described in Appendix D, while 
the various fields of each data file are described in Appendix E.

The file organisation and the small file sizes (a few hundred records) 
make consultation very rapid with Unix tools: the {\tt grep} family 
and {\tt awk}. Bourne shell programming, together with specific 
Fortran or C codes, offers interesting possibilities for coding 
elegant and efficient functions and commands. 
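As an illustration, such Unix-tool queries can be run directly on a data
file. The following sketch uses an invented three-column file (star number,
V, B-V); real BDA record layouts are described in Appendix E:

```shell
#!/bin/sh
# Invented sample data file: star number, V magnitude, B-V colour.
cat > /tmp/bda_ubv_example.dat <<'EOF'
1 6.20 0.05
2 7.85 0.42
3 9.10 1.31
EOF
# grep-style lookup of one star by its number
grep '^2 ' /tmp/bda_ubv_example.dat
# awk-style selection: star numbers of stars redder than B-V = 1.0
awk '$3 > 1.0 {print $1}' /tmp/bda_ubv_example.dat
```

The same one-liners work on any small whitespace- or column-formatted file,
which is what makes this consultation style so rapid.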
 
Commands working on data are organised in the same way: 

\begin{itemize}
\item Without parameter, the command provides a listing of the entire 
corresponding file, page by page;
\item Given one or several star numbers as argument, the command returns 
the desired data;
\item With a keyword, an option and one or several parameters, the 
command performs a selection according to the specification given, which may 
be a limiting value referring to the data or a reference;
\item Options performing specific tasks have also been developed.
\end{itemize}

Commands relating to general information (manual, references, 
bibliography, help, statistics, and so on) can be called from anywhere 
in the database. However, most commands relating to data retrieval are 
active in a cluster directory only. Consequently, one first has to 
place oneself in a cluster directory, which is simply reached by giving 
its path: {\tt ngc 2287}. Because the file names are the same from 
cluster to cluster, there is no need to tell a command which file it 
applies to: it knows automatically. Since there is usually no need to
display only part of a record instead of the whole line, it is not
necessary to know the detailed record structure of each datatype. A
simple function called {\tt nof}, for "name of file", is used to get the 
corresponding filename. Try {\tt nof ubv} just to see.
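A minimal sketch of such a lookup function is shown below; the file names
it returns are invented for illustration, and the real {\tt nof}
implementation differs:

```shell
#!/bin/sh
# Sketch of a "name of file" lookup: maps a datatype designation to
# a data file name. The names returned here are invented; the actual
# BDA file names are listed in Appendix D.
nof () {
  case $1 in
    ubv) echo ubv.dat ;;                    # UBV pe data (invented name)
    mk)  echo mk.dat ;;                     # MK types (invented name)
    *)   echo "unknown datatype: $1" >&2; return 1 ;;
  esac
}
nof ubv   # prints the file name associated with the UBV datatype
```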

\subsection{Command and function shell scripts}
The command lines may have three possible structures:

\begin{enumerate}
\item {\tt command star-number-list}: extract information for a number of 
stars. The datatype is implied by the command name, as for example:
{\tt ric 5 21 37}.
\item {\tt command option values}: perform a specific action driven by the 
chosen option. The values may be a list of stars or cutoff parameters, as 
for example: {\tt ubv -a 1 2 3} or {\tt mk -d 5}.
\item {\tt command parameter comparator values}: perform a selection according
to the chosen parameter and limits given by values, as for example:
{\tt ubv b-v -gt 1.5} or {\tt mk ref -eq 999}.
\end{enumerate}

\noindent Five comparators are used for making selections: 

\begin{itemize}
\item "greater than" (-gt), 
\item "less than" (-lt), 
\item in an interval (-i),
\item "equal to" (-eq),
\item "nonequal to" (-ne).
\end{itemize}
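These comparators could be sketched as simple awk filters; the real BDA
functions work on fixed column positions, and the use of whitespace-separated
field 2 below is an arbitrary choice made for illustration:

```shell
#!/bin/sh
# Sketch: translate each of the five comparators into an awk test on
# field 2 of the input. The "+0" forces numeric comparison, as in the
# original awkg function.
select_field () {
  case $1 in
    -gt) awk -v l="$2" '$2+0 >  l+0' ;;
    -lt) awk -v l="$2" '$2+0 <  l+0' ;;
    -eq) awk -v l="$2" '$2+0 == l+0' ;;
    -ne) awk -v l="$2" '$2+0 != l+0' ;;
     -i) awk -v l="$2" -v h="$3" '$2+0 >= l+0 && $2+0 <= h+0' ;;
  esac
}
# keep the lines whose second field lies in the interval [7, 9.5]
printf '1 6.2\n2 7.9\n3 9.1\n' | select_field -i 7 9.5
```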

The basic principle of the command software is the following:

\begin{enumerate}
\item the command parses the command line and calls the appropriate option, 
\item the option function collects the necessary elements (column headers,
data file), calls an awk function and displays the output,
\item the awk function does the real selection work.
\end{enumerate}

The options have been grouped in a small number of scripts. Their names 
begin with opt\_ and end with the options they contain, as for 
example {\tt opt\_pgine}. The names of older functions are formed simply 
by concatenation of "op" and the option name, as for example {\tt op-d}.
These options often call functions written in the awk language. With this
structure and task distribution, any new option added to the system is 
immediately available for all commands without further modifications.

Figure~\ref{ubv} gives the listing of the {\tt ubv} command which handles 
the UBV data through some 15 options. It has been chosen as an example
because nearly all commands working on photometric data are simply
links to the {\tt ubv} command. This script first places the command 
name in the variable TY and writes it in a temporary file, and sets the 
variable FS to the name of the corresponding data file given by the 
function {\tt nof} (name of file). Then it tests the number of arguments on 
the command line. If this number is equal to zero it displays the 
headers of the columns and lists the whole file. If there are one or more
arguments on the command line, the script first tests if the first
argument is numerical. If it is, the arguments are assumed to be star
numbers, and the script calls the function {\tt opt\_noet}, passing it
the source filename, the datatype and all arguments.

If the first argument is not numerical, the script tests if it is a
known option, and calls the respective function if the answer is positive.
Finally, the command line may contain parameters for a selection and the 
option is then the second argument, which is tested. The position of the 
field used to do the selection is searched in the {\it pos.dic} file located
in the directory {\sf bda/dictionnaire} and is then passed to the
function {\tt opt\_pgine}.

\begin{figure}
\caption[]{Example of a command: ubv}
\label{ubv}
\begin{verbatim}
TY=`basename $0`           
FS=`nof $TY`                 
typ $TY                      
                           
case $# in                   

0) (cat $ENT/$TY.ent ; zex $FS) | page ;;           

*) if numeric $1
   then
     opt_noet $FS $TY $*
   else 

   case $1 in
     -a) op-a $FS $TY $2 ;;
     -y) op-y $FS $TY ;;
   -plt) diag $TY ;;
  -r|-u|-f|-sort) opt_star $1 $FS $TY $2 $3 ;;
  -d|-h|-t|-dr|-nb) opt_drnbt $1 $FS $TY ${2-1} ;;
   esac

   case $2 in
     -gt|-lt|-i|-eq|-ne)  
          pos=`grep $TY  $DIC/pos.dic | \
             awk ' $1 == cle {print $2,$3} ' cle=$1`
          opt_pgine $2 $FS $TY $pos $3 $4 ;;
     -id) opt_star $2 $FS $TY $1 $3 ;;
   esac 

   fi ;;
esac
echo ' '
\end{verbatim}
\end{figure}

Figure~\ref{opt} shows the code of this central function that performs
the selections on star numbers, references, and parameters. 
It first puts the option, passed as the first argument, into the variable 
opt and tests its value. In all five cases the structure of the code is similar:
the whole action is put in a sub-shell so that the pagination made by 
{\tt more} or {\tt page} will leave the headers apparent on the top of the 
first page. The first part sends the appropriate column headers; the second 
feeds a pipe with a data file and the data are filtered by the 
corresponding awk function. The result is displayed on the standard output, 
but a copy is sent for further use to a temporary file {\it sortie.out} 
located in the directory {\sf bda/tmp}, which is recorded as the environment 
variable {\bf SRT}.
When the action is finished, the file {\it sortie.out} is read to count
the number of stars and display a message like: "Selected stars: 10".
The command {\tt zex} contains a simple {\tt if} test: if the file is 
compressed, {\tt zcat} is used, otherwise {\tt cat}.
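The logic of {\tt zex} can be sketched as follows, assuming
{\tt compress}-style {\it .Z} files; testing for a {\it .Z} suffix is one
plausible way to detect the compressed state:

```shell
#!/bin/sh
# Sketch of the zex idea: emit the content of a data file whether or
# not it is currently compressed.
zex () {
  if [ -f "$1.Z" ]
  then zcat "$1.Z"        # compressed copy: decompress on the fly
  else cat  "$1"          # plain file: just copy it through
  fi
}
```

With this wrapper, pipelines such as {\tt (cat headers ; zex file)} behave
identically on compressed and uncompressed files.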

\begin{figure}
\caption[]{Example of a function: opt\_pgine}
\label{opt}
\begin{verbatim}
opt=$1
shift

case $opt in
-lt) (cat $ENT/$2.ent ; zex $1 | awkp $3 $4 $5 |\
        tee $SRT )| more ;;
-gt) (cat $ENT/$2.ent ; zex $1 | awkg $3 $4 $5 |\
        tee $SRT )| more ;;
 -i) (cat $ENT/$2.ent ; zex $1 | awkg $3 $4 $5 |\
        awkp $3 $4 $6 | tee $SRT )| more ;;
-eq) (cat $ENT/$2.ent ; zex $1 | awke $3 $4 $5 |\
        tee $SRT )| more ;;
-ne) (cat $ENT/$2.ent ; zex $1 | awkn $3 $4 $5 |\
        tee $SRT )| more ;;
esac

awk ' { print $1 } ' $SRT | sort -nu > $BTP/liste.noet
echo ' '
case $LNG in
  fr) echo " Etoiles selectionnees:  
        `grep -c '^.' $BTP/liste.noet`" ;;
 eng) echo " Selected stars:  
        `grep -c '^.' $BTP/liste.noet`" ;;
esac
\end{verbatim}
\end{figure}

The following figure~\ref{awk} shows a typical example of the awk functions
used to perform the real work. The BEGIN preamble is used to recover the
arguments of the command line. The information on the data field passed
to the function consists of the column number (pos) of the beginning of the
field, its length in characters (lon) and the cutoff value (lim). The 
substring is extracted and compared to the limit. If the test is positive,
the line is printed. The other awk functions used in opt\_pgine differ only
by the comparison they make.

\begin{figure}
\caption[]{Example of an awk function: awkg}
\label{awk}
\begin{verbatim}
awk '
BEGIN {
pos='$1' 
lon='$2' 
lim='$3'+0
}
{
n=substr($0,pos,lon)+0
if( n >= lim )
  printf "%s\n",$0  
} '
\end{verbatim}
\end{figure}

\subsection{A complementary approach}
A different solution was sought in another approach. New commands 
with shorter and more direct scripts were written. Their names evoke 
the action they perform, like {\tt meas} or {\tt list}. They effectively 
run a little faster and usually take less than one second, but the price 
to pay is more typing. An example of the complete syntax would 
be {\tt meas ubv 1}. The string {\tt meas} may be assigned to a function 
key (F2 to F10) to save typing. Because these 
short scripts produce a shorter response time, this reverse policy 
has been adopted for the most common actions: the extraction
of measurements, the listing of a data file on the screen, the display of
the detailed content of a file and data comparison.

A number of commands have no equivalent in command option syntax, like
{\tt carte, ximage, post}, and so on. See Appendix A for a list and
the second part of this guide for a description of their use and
actions.

\section{Menu mode}
Because a lot of actions have been coded, it becomes difficult to remember
all the commands, options and required parameters. Therefore menus have
been designed that group actions related to the same subject, like {\tt xy}
or {\tt sbs} do, or to the same working environment, like {\tt ttr}.
The menu text should make it easy to recognize the purpose of the items,
and the item order follows the logical order in which the work is usually
done. An initial menu {\tt go} is the starting point of the BDA menu mode.
It offers the possibility to get basic information and perform various
tasks without knowing any of the command names.

The main menu usually leads to a second one, more specific to the
possibilities offered to solve the problem, and a third-level menu 
may appear before the task is actually done. In the menu cascade,
completing an action with a menu brings one back to the previous menu.

Basic database query is possible with the main menu {\tt go}, and it is
possible to obtain information without knowing anything about the database 
commands, which meets the design requirements. It is however 
difficult to provide all facilities with menus, because that would lead to 
asking too many questions and require too much typing.
The present menu {\tt go} is not the best solution for querying data
and making subtle selections; the command or prompt modes are better in 
this case. However, the menu mode offers easy access to sub-menus that 
would be called anyway, to use calibrations, to study spectroscopic binaries 
or to work on cross-references. The menu mode is the best way to access them 
and forget about command names and details of the processes.

Working in the menu mode is described in part II, chapter 8.

\section{The prompt mode}
In another attempt to diminish the typing needed for repetitive actions and
make some functions easier, a prompt mode has been designed. The fact
that most commands querying data offer the same options and have many 
similar functionalities makes it a little absurd to have so many command
names. This was also the reason for developing the three-level structure
using options and functions.

The reverse philosophy is faster but needs more typing. The prompt mode
is an attempt to combine fast response time and low typing. An attempt has 
been made to keep the number of files to a minimum. The program {\tt dbms} 
is a large Bourne shell script. When run, it presents a prompt {\tt bda>}
to which commands are issued. The prompt is then modified to contain the
command name: an agent has been called, in some similarity with Simbad 
vocabulary. At the agent prompt, one can enter star numbers, options
and parameters, or a datatype, depending on the context. One can enter 
{\tt help} at any prompt to get a list of the valid answers. The agent
{\tt help} entered at the bda prompt opens a new window, which allows one
to keep the help in view in parallel. The help presents a list of all
agents and datatypes, with information on each.

At a certain level of complexity, it again becomes difficult to present
a simple syntax and easy methods to execute tasks which require
complex selections, with several verbs and a variable number of parameters.
Therefore, in this first version of {\tt dbms}, the command mode syntax
is used for performing selections, because it is finally the most compact.
However, the {\tt select} agent allows a syntax like 
{\tt measure ubv> v < 7.5} to select stars with V magnitudes less than 7.5
from the UBV data file.
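A minimal sketch of how such a phrase could be translated into an awk filter
is shown below; the field names and their column mapping are invented for
illustration and do not reflect the actual {\tt dbms} parser:

```shell
#!/bin/sh
# Sketch: turn a "field operator limit" phrase into an awk filter,
# in the spirit of the select agent. Column assignments are invented.
select_expr () {
  case $1 in
    v)   col=2 ;;                 # invented: V magnitude in column 2
    b-v) col=3 ;;                 # invented: B-V colour in column 3
    *)   echo "unknown field: $1" >&2; return 1 ;;
  esac
  case $2 in
    '<') awk -v c="$col" -v l="$3" '$c+0 < l+0' ;;
    '>') awk -v c="$col" -v l="$3" '$c+0 > l+0' ;;
  esac
}
# keep the stars with V < 7.5 from a two-line sample
printf '1 6.2 0.10\n2 7.9 0.42\n' | select_expr v '<' 7.5
```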

Complexity also lies in the parsing program that analyses the entry line.
The shell has been restricted to some standard style. Writing the program
in Perl might make complex entry-line parsing easier.

\section{The graphical user interface mode}
Finally, the only way to avoid much typing is to use a graphical interface.
A graphical user interface is being developed with the really fantastic tool
called OpenWindows Developer's Guide 3.0, which allows one to build windows,
place the widgets (choice panels, buttons, menus and so on), and create links
with actions. The first version has been made for Open Look with SUN's XView 
library. It offers a query mode based on the use of choice settings, command 
buttons and menus. It would however be useful to have a version of this 
interface built with the X Intrinsics Toolkit, to port the interface 
to workstations using the Motif widgets.

The basic concept of the window presentation is to have a selection
panel for choosing datatypes and command buttons or menus to perform
the actions. The command part is a stack of five panels called:
Interrogation, Selection, Application, Documentation and Development.
Only the first command panel has been completed; it allows one to query 
measurements, list files, and query the bibliography and references. Short 
on-line help explaining how to use the interface and each button has been 
prepared. It is activated in the standard manner, i.e.\ by placing the 
mouse pointer on the button and pressing the keyboard key $<$help$>$. 
Parts of the other panels have been realised to date. 


\chapter{Installing BDA}

\section{From tape to disk}

Copies of the database are available for SUN and DEC Unix workstations.

The tape has been created on SUN with the command 
{\tt tar cfb /dev/rst1 126 .}. The information is thus blocked
with a blocksize of 126.

To install the database, create a directory called {\sf bda} ({\tt mkdir bda}),
move to it ({\tt cd bda}) and untar the tape ({\tt tar xbf /dev/unit 126 .}).

{\sf bda} is the top directory which contains all the sub-directories. 
It also contains a file named {\it .bdarc} (the dot is important!) 
which collects the environment variables and path. In particular, 
one has to write in the first line of this file the path to {\sf bda}.
The present one is {\sf /home/vaud/mermio/bda}. {\sf /home/vaud/mermio/}
has to be replaced by the new path.

\section{The environment}
The file {\it .bdarc} in the directory {\sf bda} contains environment 
variables which have to be adapted according to your configuration. 
You have to edit this file and replace the parameters with those 
corresponding to your configuration.

\subsection{Definition of environment variables}

\begin{itemize}
\item setenv BDA /home/vaud/mermio/bda
\item setenv SRC 7                    
\item setenv OCL 8 
\item setenv CUT 25        
\item setenv RSRC 5                     
\item setenv ROCL 6                  
\item setenv LCUT 25      
\item setenv LNG eng  
\item setenv DEFTYPE ubv
\item setenv EDT vi   
\item setenv PLOT sm                  
\item setenv DEVICE "X11 -geometry 640x600+500+270 -bg white -fg black"
\item setenv PRINT "postland sp2"    
\item setenv HELPPATH \$BDA/xhelp
\item setenv MANPATH \${MANPATH}:\${BDA}/man
\end{itemize}

\subsection{Meaning of environment variables}
\begin{itemize}
\item {\bf BDA}  is the access path to the root directory of the database 
{\sf bda/}
\item {\bf SRC}  is the number of elements of that path to {\sf ngc}, {\sf ic} or {\sf anon}
\item {\bf OCL}  is the number of elements to the cluster name
\item {\bf CUT}  is the number of characters of the path until {\sf bda/}.

The data for NGC 2287 are located in the directory: {\sf bda/ngc/2287}.
Therefore, depending on the access path to {\sf bda}, the variables
{\bf SRC}, {\bf OCL} and {\bf CUT} have to be adapted.
\item {\bf RSRC}, {\bf ROCL} and {\bf LCUT} represent the same quantities,
but are used by C programs. They are presently different; this depends on the 
actual installation of each machine and the links between disks and directories.
\item {\bf LNG} is the language used. The two possible values are {\bf eng}
and {\bf fr}. 
By setting {\bf LNG} to {\bf eng}, most dialogs will be in English instead 
of French. Sorry for the remaining French.
\item {\bf DEFTYPE} sets the default datatype when entering a command that
needs a datatype without argument.
\item {\bf EDT} contains the editor to use when the work or a menu item
proposes editing a file.
\item {\bf DEVICE} is the kind of device used for the graphics output on
the screen.
\item {\bf PRINT} contains the name of the printer device.
Both variables are used by macros and C programs.
\item {\bf HELPPATH} is the path to the on-line help related to the
graphical user interface
\item {\bf MANPATH} this line adds the path to BDA man pages to the usual
MANPATH.
\end{itemize}
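As an illustration, candidate values for these path-related quantities can
be derived from an installation path with standard tools; the path below is
the example quoted above, and the exact offsets expected by the programs
should be verified with the {\tt testenv} command:

```shell
#!/bin/sh
# Sketch: derive candidate values for the path-related variables
# from an installation path (example path; adapt to your machine).
BDA=/home/vaud/mermio/bda
# number of characters up to and including the trailing slash of bda/
printf '%s/' "$BDA" | wc -c
# number of slash-separated elements in the path of a cluster directory
printf '%s' "$BDA/ngc/2287" | awk -F/ '{print NF}'
```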

\subsection{Checking the installation parameters}
A shell program has been prepared to test that the values have been correctly 
assigned. To use it, move to the directory of NGC 2287 (from {\sf bda}: {\tt cd
ngc/2287}) and enter {\tt testenv}. Each option displays the current value 
of the parameter, the expected string and the one obtained.

Once you have set the path for BDA, most shell scripts and programs
use this path as \$BDA or are recovered by the function {\tt getenv()}.

\subsection{The file {\it .bdarc}}
The file {\it .bdarc} contains also the path to directories with executable
programs or commands:

\begin{itemize}
\item set path = (\$path \$BDA/bin)
\item set path = (\$path \$BDA/progf )
\item set path = (\$path \$BDA/progc )
\item set path = (\$path \$BDA/progs )
\item set path = (\$path \$BDA/progx )
\item set path = (\$path \$BDA/graphic )
\end{itemize}

\subsection{The file .alias.bda}
The file {\it .alias.bda} contains some aliases that are used to move
more easily within the database. Appendix G lists the most useful.
Any other alias may be added in this file.

\section{Other settings} 

\subsection{The file {\it .cshrc}}

The simplest way to go to bda is to add in the {\it .cshrc} file of 
each user an alias like:\\
\noindent {\tt alias bda 'cd /"path"/bda ; source .bdarc ; 
source .alias.bda'} \\
\noindent {\tt cd} moves you to the base directory {\sf bda}, {\tt source .bdarc} initialises the environment variables, and {\tt source .alias.bda} the useful aliases. 

\noindent The {\it .cshrc} file should also contain the following instructions:\\
     {\tt set lpath = ()}\\
     {\tt set lcd = ()}\\
     {\tt set cdpath = (.. ../..)}\\

\subsection{The .login file}
The {\it .login} file should contain the terminal code according to the 
Termcap database if it is different from "sun".

\noindent To access BDA man pages, the path should be added:

{\tt setenv MANPATH \$\{MANPATH\}:\$\{PATH\}/bda/man}

\subsection{The .logout file}
The {\it .logout} file should contain the command {\tt recompress} located 
in {\sf bda/bin} which compresses at the end of session the files that
have been uncompressed for various reasons.

\subsection{The .openwin-menu file}
It may appear convenient to start the various query modes with a choice in
the main menu. If the group shown in Figure~\ref{menu} is included in the 
{\it .openwin-menu} the menu item "Database" will appear in the main menu 
and a submenu will offer the five choices: Command mode, Menu mode, Prompt 
mode, GUI mode and NCSA Mosaic BDA front page.

\begin{figure}
\caption{Menu items}
\label{menu}
\begin{verbatim}
"Database"           MENU
"Command mode"       DEFAULT       chdir $BDA ; 
        exec $OPENWINHOME/bin/xview/shelltool tcsh 
          -WI $BDA/bda.icon -WL "BDA Comm" 
          -Wl "Open Cluster Database Command"
"Menu mode"		
        exec $OPENWINHOME/bin/xview/shelltool go 
          -WI $BDA/bda.icon -WL "BDA Menu" 
          -Wl "Open Cluster Database Menu"
"Prompt mode"                      chdir $BDA/ngc/0103 ;
        exec $OPENWINHOME/bin/xview/shelltool dbms 
          -WI $BDA/bda.icon -WL "BDA Prompt" 
          -Wl "Open Cluster Database DBMS"
"GUI mode"               chdir $BDA/ngc/0103 ; exec xbda
"Mosaic"                 xmosaic $BDA/html/bda.html
"Database"            END
\end{verbatim}
\end{figure}

Of course, each action must be written on a single line, which is not possible in the figure.

\section{Compilation of the C or Fortran programs}
The code sources are distributed in several directories:

\begin{itemize}
\item \struc{bda/progf}{Fortran codes and executables;}
\item \struc{bda/progc}{C codes;}
\item \struc{bda/progs}{programs using the SM library;}
\item \struc{bda/progx}{graphical interface C source;}
\item \struc{bda/graphic}{source and executable of the program diag.}
\end{itemize}

A file called {\it Makefile} has been prepared and is to be found in each 
of these directories. Most programs are compiled with a simple command like:\\

{\tt f77 -o astrom astrom.f}  or  {\tt cc -o ajzero ajzero.c}\\

The name of the executable is identical to the program name, without the
suffix .f or .c.

A few of them need either a routine or a library. See the {\it Makefile} files
for more information.
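As an illustration (not part of the distribution), all Fortran sources in one of these directories could be rebuilt with a small shell loop; {\tt f77} is assumed to be the system Fortran compiler:

```shell
# hypothetical rebuild helper: compile every .f source in the current
# directory into an executable of the same name (suffix stripped)
build_all_fortran() {
    for src in *.f; do
        [ -f "$src" ] || continue        # no .f files: glob left unmatched
        f77 -o "${src%.f}" "$src"
    done
}
```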

\section{Graphics package (SM 2.3)}
The SM package has a {\it .sm} initialisation file. There is an example of 
such a {\it .sm} file (but it is not used) in the main directory {\sf bda}.
The line starting with {\bf macro2} defines the path to the file
containing the user's default macros. 

Part of the database graphics is done with SM macros and these are
located in the file {\it default} in the directory {\sf bda/smacro}. To 
use the database graphics, you have to indicate the right path in {\bf macro2} 
or add the content of the BDA {\it default} file to your own.

\section{Tutorial of the database}
The present guide is contained in the file {\it manuel.tex} in the
directory {\sf bda/introduction}. The file {\it manuel.ps} is ready
for printing on a postscript printer. A hypertext version is available under
NCSA mosaic and can be reached from BDA front page.

\section{References}
The database has been described in two issues of the Information Bulletin
of the Strasbourg Data Center.

\noindent Mermilliod J.-C. 1988, Bull. Inform. CDS 35, 77\\
Mermilliod J.-C. 1992, Bull. Inform. CDS 40, 115\\

\section{Language of the database}
When the language is set to English ({\bf eng}) you should not find 
any remaining dialogues in French. The English version has not yet been 
reviewed by native English speakers, so you may find awkward phrasing
or poor translations. Please send me your comments to improve it, as
well as remarks on this user's guide: unclear explanations, other things
you would like to find in it, and so on.

The number of on-line help texts displayed by the command {\tt emploi} 
that have been translated into English is growing. If a text that does
not yet exist would be very useful to you, please drop me a mail and
I will do my best to prepare it.

\section{Comments and questions}
I am open to any comments, questions, suggestions and criticism that help
improve the portability (I had a number of changes to make to accommodate
DEC Unix), make the installation easier and versatile enough to take into
account various situations, and add more application software to make the 
database more useful. 

\noindent My E-mail address is: mermio@scsun.unige.ch

\noindent I wish you much pleasure in using BDA.

\begin{thebibliography}{}
\bibitem{acv}
Alter G., Ruprecht J., Vanysek V. 1970, Catalogue of Star Clusters and
Associations. Akademiai Kiado, Budapest 
\bibitem{fr88}
Frot P. 1988, 3 Syst\`emes Experts en Turbo C (Sybex, Paris)
\bibitem{hm85}
Hauck B., Mermilliod M. 1985, A\&AS 60, 61
\bibitem{jm53}
Johnson H.L., Morgan W.W. 1953, ApJ 117, 313
\bibitem{ly}
Lyng{\aa} G. 1987, Catalogue of open clusters parameters (5th ed.), CDS
\bibitem{jcm72}
Mermilliod J.-C. 1972, Bull. Inform. CDS 3, 19
\bibitem{jcm73}
Mermilliod J.-C. 1973, Bull. Inform. CDS 4, 22
\bibitem{jcm76a}
Mermilliod J.-C. 1976a, A\&AS 24, 156
\bibitem{jcm76b}
Mermilliod J.-C. 1976b, A\&AS 26, 419
\bibitem{jcm79a}
Mermilliod J.-C. 1979a, A\&AS 36, 163
\bibitem{jcm79b}
Mermilliod J.-C. 1979b, Bull. Inform. CDS 16, 2
\bibitem{jcm84a}
Mermilliod J.-C. 1984a, Bull. Inform. CDS 27, 141
\bibitem{jcm84b}
Mermilliod J.-C. 1984b, Bull. Inform. CDS 26, 9
\bibitem{jcm86a}
Mermilliod J.-C. 1986a, Bull. Inform. CDS 31, 175
\bibitem{jcm86b}
Mermilliod J.-C. 1986b, A\&AS 63, 293
\bibitem{jcm88a}
Mermilliod J.-C. 1988a, Bull. Inform. CDS 35, 77
\bibitem{jcm88b}
Mermilliod J.-C. 1988b, in Astronomy from Large Database I, Eds F. Murtagh 
\& A. Heck, ESO Conf. and Work. Proc. no 28, p. 419
\bibitem{jcm92a}
Mermilliod J.-C. 1992a, Bull. Inform. CDS 40, 115
\bibitem{jcm92b}
Mermilliod J.-C. 1992b, in Astronomy from Large Database II, Eds A. Heck 
\bibitem{mn89}
Mermilliod J.-C., Nitschelm C. 1989, A\&AS 81, 401
\bibitem{mmm}
Meynet G., Mermilliod J.-C., Maeder A. 1993, A\&AS 98, 477
\bibitem{nm90}
Nitschelm C., Mermilliod J.-C. 1990, A\&AS 82, 331
\bibitem{rbw}
Ruprecht J., Balasz B., White R.E. 1981, Catalogue of Star Clusters and
Associations, Supplement I. Ed. B. Balasz, Akademiai Kiado, Budapest
\bibitem{vlf}
van Leeuwen F. 1985, in IAU Symp. no 113, Eds J. Goodman \& P. Hut (Reidel,
Dordrecht), p. 579
\end{thebibliography}


\part{HOW TO USE BDA}

\chapter{The command mode}
\section{Basic Commands}

The following description provides many examples of the command 
usage and some comments.

\subsection{Content of BDA}
To know which NGC clusters exist in BDA move to {\sf bda} and enter:\\
\expl{$>$ ngc}{}\\
\expl{$>$ ls}{}\\
\noindent This will produce a list of all NGC clusters, which appear as subdirectories of the {\sf bda/ngc} directory. Similarly, enter:\\
\expl{$>$ ic}{} \\
\expl{$>$ ls}{} \\
\noindent or\\
\expl{$>$ anon}{} \\
\expl{$>$ ls}{}\\
\noindent to look at the {\sf ic} and {\sf anon} directories respectively.

\subsection{Moving within BDA}
To move to a cluster directory, simply enter:\\
\expl{$>$ ngc 2287}{}\\
\noindent when you are in {\sf bda} \\
\expl{$>$ ic 1805}{will put you in the cluster IC 1805}\\
\expl{$>$ anon mel022}{will transport you to the Pleiades directory.}

\noindent These motions are possible if {\tt cdpath} has been set to {\tt (.. ../..)}
as explained in 6.3.1. A few aliases have been created (see Appendix G)
to reach the directories of nearby clusters, and you can create more.

\subsection{List of data files}

\noindent When in a cluster directory, you can look at the data files:\\
$>$ {\tt ls} or {\tt ls -l}\\
\noindent Do not forget that the files are compressed, so they have the 
suffix ".Z". You can uncompress them with the {\tt uncompress} system 
command or with {\tt testfZ} which uncompresses the file and writes its name 
and path in the file {\it fich\_compr} located in {\sf bda/gestion/}. 
At the end of the work session the files are automatically recompressed if
the command {\tt recompress} has been added in the {\it .logout} file.

\expl{$>$ testfZ mkk.don}{}
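\noindent The behaviour of {\tt testfZ} can be sketched as follows; this is an assumption based on the description above, and the actual script in {\sf bda/bin} may differ:

```shell
# Rough sketch of the testfZ logic (an assumption): uncompress the
# file and record its full path so that recompress can restore it
# at the end of the session.
testfZ_sketch() {
    uncompress "$1.Z" || return 1
    echo "`pwd`/$1" >> "$BDA/gestion/fich_compr"
}
```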

\subsection{Looking at the file content}
Several commands have been prepared to handle compressed files.
The syntax is simply:

\noindent {\tt > command} {\it filename}

\expl{$>$ scl}{display a scale and the first ten lines of a file}

\expl{$>$ zp}{list a compressed file, one page at a time}

\expl{$>$ zhd}{list the first 15 lines of a file}

\expl{$>$ ztl}{list the last 15 lines of a file}

\expl{$>$ zpr}{print a file with {\tt lpr}}

\expl{$>$ wz}{count the number of lines}

\expl{$>$ vz}{uncompress a file and call the vi editor}

\noindent Note that the suffix ".Z" should not be present. Adding it 
will produce an error.

\section{Manual and help}

\subsection{Manual}
\paragraph{The command man}
Man pages have been written for many commands, and are accessible with 
the system command {\tt man}, provided the {\bf MANPATH} has been 
correctly set.
You may consult the manual information interactively from anywhere 
in the data base. The manual files are located in the directory 
{\sf bda/man/man1}. Use {\tt man -t command-name} to print the 
man pages.

\noindent {\tt man bda} produces a list of the commands for which 
man pages are available.

\paragraph{The command mnl}
You may consult the manual information interactively from anywhere 
in the data base. The manual files are located in the directory 
{\sf bda/manuel}. They all have the suffix ".mnl". After displaying 
the manual information, the shell script asks you if you want to see the 
source code of the command or function. These descriptions are in French.

\noindent Interactive form for the command {\tt mnl}:

\expl{$>$ mnl ubv}{} 

{\tt Do you want a listing of the code (y/n) ?} {\tt y}

\subsection{Documentation}
A number of texts describe the database and they can be reached by 
entering {\tt dsc} which will display a menu. A shorter way is to enter 
directly {\tt doc} or {\tt help}. 

Most of these texts are still in French. They have yet to be 
translated, and also improved to give better information. The texts may 
be revised to take into account your experience and the problems you 
may face. 

A hypertext description of the database and commands is being developed.
The front page of the database is displayed with the command 
{\tt xhelp bda}. The user's guide is also accessible in hypertext form
from the front page.

If the command {\tt xhelp} does not work correctly, check the way NCSA
mosaic is called. Move to the directory {\sf bda/bin} and edit the
file {\it xhelp}.

\section{Command names, options and filenames}

\subsection{Command names}
The command {\tt ncm} directly displays the small menu which drives the 
information on command names:

\expl{$>$ ncm}{display a menu arranged by datatypes}

\expl{$>$ ncm ddo}{display directly the information on {\tt ddo}}

\noindent The command names are listed in Appendix A.

\subsection{Command capabilities}
The description obtained with the command {\tt emploi} offers the best 
way to determine which syntax should be used to get the desired results:

\expl{$>$ emploi ubv}{display a small menu}

\begin{verbatim}
          which item?                4
          back to the menu (y/n)?    y
          which item?                7
          back to the menu (y/n)?    n
\end{verbatim}

\noindent The same menu may be displayed with the option -h: {\tt ubv -h}.
A hypertext version is available with NCSA Mosaic: the command name is 
{\tt xhelp} and it takes a command name as argument, as in {\tt xhelp ubv}.


\subsection{Option meaning}
The command {\tt option} informs you about the meaning of the options:

\expl{$>$ option}{display the list of options}

\expl{$>$ option -d}{provide information on "-d" only}


\subsection{File names}
The command {\tt fichier} behaves like the previous one:

\expl{$>$ fichier}{display the list of filenames}

\expl{$>$ fichier wal.mes}{tell the content of that file}

\subsection{File fields}
The command {\tt field} gives the field names used as parameters for 
selection or sorting.

\expl{$>$ field ubv}{display the data fields in the file {\it ubv.peo},
giving each field's designation, starting position and length}
\expl{$>$ field ubv v}{display the information for v only}

\subsection{Datatypes}
The command {\tt dtype} explains which data are concerned by a given
datatype.

\expl{$>$ dtype ubv}{display an explanation line concerning the ubv datatype}

This command may also be used to search for the datatype corresponding to
some kind of data.

\expl{$>$ dtype UBV}{display the various datatypes that correspond to UBV
and related data}

\expl{$>$ dtype CCD}{display the various datatypes that correspond to CCD
data}

\section{Bibliography and References}

\subsection{The modern bibliography}
The command {\tt bib} handles the bibliography from 1969 to the
present day created from the Astronomy and Astrophysics Abstracts. 
The following example presents a typical interactive session; it corresponds 
also to the interrogation started from the main menu.

\expl{$>$ bib}{}
\begin{tabbing}
\hspace{2cm} \= {\tt which year?}  \hspace{2cm} \= {\tt 88} \\                                
\> {\tt which subject?} \> {\tt lithium} \\
\\
\> {\tt another year?}  \> {\tt 87} \\
\> {\tt another subject?} \> $<$return$>$ \\
\\
\> {\tt another year?} \> $<$return$>$ \\
\> {\tt another subject?} \> {\tt Coma\_Ber} \\
\\
\> {\tt another year?}  \> {\tt 0} (or n or fin) \\
\end{tabbing}

The first request searches the 1988 bibliography for references concerning
lithium, the second one searches the 1987 bibliography for the same
topic, and the third one explores the 1987 bibliography again, but
for references about the Coma Ber cluster.

This interrogation handles only one year and one 
subject at a time, which is limiting. Therefore, several options have 
been introduced to improve the efficiency. The option "-eq" allows you 
to select one year and up to three different subjects simultaneously, 
while the options "-i", "-lt" and "-gt" allow you to work within two 
limiting years. They also accept up to three keywords.

\noindent $>$ {\tt bib -i 84 88 Hyades Pleiades lithium}\\
\expl{\ }{display all references concerning the 
problem of lithium abundances in the Hyades and the Pleiades clusters 
published from 1984 to 1988.}

\expl{$>$ bib -eq 86 752 lithium}{displays the references relating to the 
lithium abundance in NGC 752 published in the year 1986.}

\expl{$>$ bib -gt 86 752 lithium}{display the references published
in 1986 and later}

\expl{$>$ bib -lt 75 Tr16}{display the references published between
1969 and 1975 for Trumpler 16}

\expl{$>$ bib -n Claria -eq 88}{list the references published in 1988
by Clari\'a.}

\expl{$>$ bib -k}{display the keywords which may be used. Cluster names are 
not listed in this file, but you can include all NGC and IC numbers as keywords.
Generally "NGC" and "IC" are not necessary. The names of anon clusters are 
spelled as in the {\sf anon} directory, except that the first character is upper case and there is no left padding with zeros.}

\expl{$>$ bib -km cluster}{display a list of the keywords which 
contain the string "cluster" (many keywords are composite).}

\expl{$>$ bib -ki stellar}{list all keywords beginning with the 
word "stellar".}

\expl{$>$ bib -ke mass}{list all keywords ending with "mass".}

\expl{$>$ bib -t 93}{count the number of references for the year 1993}

\expl{$>$ bib -c 93}{give access to the keyword list of the
references found by the previous search. This command should therefore
be issued only after a successful search}

\subsection{The Budapest bibliography}
Information from the Catalogue of Star Clusters and Associations 
(Alter et al. 1970) and its first supplement (Ruprecht et al. 1981), available 
on tape, has been included for NGC, IC and anon clusters. The 
catalogue has been separated into smaller files, one for each cluster, 
each of which bears the name {\it bdp.cat}. 

The command {\tt bdp} offers a few options to select information from 
the Budapest bibliographic catalogue for one year or several consecutive 
years (options -eq, -gt, -lt and -i), or according to an author's name 
or a given topic (option -s). The abbreviation system used within 
this catalogue is complex. The abbreviations for the subjects, the
journals and the publications are described in the hypertext presentation of
the bibliography.

\expl{$>$ ngc 0129}{move to NGC 129. Notice the leading 0}

\expl{$>$ bdp}{list the complete file}

\expl{$>$ bdp tr16}{list the bibliography for the cluster Trumpler 16. 
This command can be called from anywhere in the database, for any cluster,
even those which do not have their own directory.}

\expl{$>$ bdp -eq 1970}{select the references for 1970}

\expl{$>$ bdp -gt 1960}{produce a listing of all references   
published later than 1960}

\expl{$>$ bdp -i 1950 1960}{list the references for the interval given.}

\noindent The option -s is used to make selections on:

\expl{$>$ bdp -s Trumpler}{a name (case is not important)}

\expl{$>$ bdp -s cepheids}{a subject}


\subsection{The information from Lyng{\aa}'s catalogue}
The information and global parameters from Lyng{\aa}'s (1987) catalogue 
are recorded in each cluster directory. The output format 
is similar to that of the edition on microfiche. In addition, it is 
possible to retrieve specific information according to the option 
used with the command {\tt lyn}.

\expl{$>$ ngc 2287}{move to the cluster NGC 2287}

\expl{$>$ lyn}{display the information for NGC 2287}

\expl{$>$ lyn 2516}{same, but for NGC 2516}

\expl{$>$ lyn -p}{summarize the parameters of NGC 2287:
distance, reddening and age}

\expl{$>$ lyn -c}{give the equatorial and galactic coordinates}

\expl{$>$ lyn -d}{display information on the diameter}

\noindent The syntax {\tt lyn cluster-name} is used to get the information
for clusters which do not have their own directory.

\subsection{The references}
The references may be obtained in several different ways with the command 
{\tt ref} which starts an interactive program. 
The references are written in distinct, uncompressed files located in 
the directory {\sf bda/references}. The filenames are of the form 
"datatype.ref", e.g. {\it ubv.ref} or {\it vsini.ref}. 

\expl{$>$ ref}{}

\begin{verbatim}
          For which kind of data?       vsn
          Give the reference number:      1
          Another reference?             10
          Another reference?             -1
          Precise your choice:          orb
          Give the reference number:      5
          Another reference?              0
\end{verbatim}

The answer "-1" displays the list of data types for which references
are available. It also permits changing the data type for a further
query.

\expl{$>$ ref -h}{display information on the command capabilities.}

\expl{$>$ ref -n Hoag ubv}{display the references containing the 
name of Hoag among the authors' names in the UBV references.}

\noindent The next options work only with cluster data files, so you need 
to move to a cluster directory:

\expl{$>$ ngc 2287}{}

\expl{$>$ ref -t ubv}{list all references relating to the UBV 
photoelectric data}

\noindent The option -m is used to get the references after a data
query or a selection:

\expl{$>$ mk 103}{select the MK data for star \# 103}

\expl{$>$ ref -m}{list the corresponding references}

\expl{$>$ vsn V -gt 280}{select large Vsini}

\expl{$>$ ref -m}{list the references}

\subsection{Cluster numbering system}
\noindent The reference for the numbering system adopted in the
database is obtained with {\tt sysno}. 

\expl{$>$ sysno}{display the reference of the numbering system for the current
cluster}

\expl{$>$ sysno 2516}{display the reference of the numbering system for
NGC 2516 from anywhere in the database}

\subsection{Ongoing observations}
The command {\tt baas} simply displays the {\it baas.dat} file which
contains information on ongoing work and observations. It does not have
any option.


\section{Remarks}

The command {\tt rem} is used to query the remarks. It is most often used
with star numbers and has only one option "-k", used to search for a string 
of characters in the remark file.

\expl{$>$ ngc 2516}{}

\expl{$>$ rem}{list the file}

\expl{$>$ rem 10 27 29}{search for stars number 10, 27 and 29}

\expl{$>$ rem -k Delta Scuti}{list the stars classified as Delta Scuti 
variables.}


\section{Cross-Identifications}

\subsection{The cross-reference tables}
BDA contains cross-reference tables which provide the 
cross-identifications between many numbering systems. Information may 
be retrieved in both senses: either from the original numbering systems to
that adopted in BDA or from BDA's system to any other one.

\noindent If you are not in the directory of NGC 129, enter:

\expl{$>$ ngc 0129}{}

\expl{$>$ tab -r}{list the references of the numbering systems included in 
the cross-reference table.}

\expl{$>$ tab}{list the cross-reference table.}

\expl{$>$ tab 16 20 30}{display the lines for stars 16, 20 and 30.}

\expl{$>$ tab -r 4}{remind you of the reference corresponding to column 4.}

\expl{$>$ tab -l 3}{select the entries corresponding to column 3.}

\expl{$>$ tab -s 3}{extract pairs of cross-identifications (BDA number,
 column 3 number), sorted by column order.}

\expl{$>$ tab -i}{print the references and the cross-reference table.}

\expl{$>$ tab -c}{start a cross-identification program in the reverse sense: 
from any column to the adopted system.}

\begin{verbatim}
          Give the reference number:   0 
 
                  (0 displays the reference list, when you need it.)

          Give the reference number:   2
          Which star?                  5
          Another star?               10
          Another star?               -1
          Give the reference number:   3
          Which star?                 20
          Another star?                0
\end{verbatim}

\noindent The entry "-1" allows you to change the reference searched; here one
moves from reference 2 to reference 3.

\expl{$>$ tab -s}{produce an output sorted according to the selected column. 
May be redirected to any file.}

{\tt sort on which column?}   {\tt 2}

\noindent Special numbering systems may also be interrogated:

\expl{$>$ ngc 0457}{}

\expl{$>$ tab -k 22- 75}{look for "22- 75" in the table}


\subsection{The cross-identifications}

Information on cross-identifications with astronomical catalogues may be 
obtained with the command {\tt idm}. The cross-IDs are split into two files.
The first one contains the common identifications (HR, HD, DM, NLS, SLS,
GCVS), and the second one the IDS, ADS, SAO and miscellaneous IDs.

\noindent If you are not in the directory of NGC 129, enter:

\expl{$>$ ngc 0129}{}

\expl{$>$ idm1}{list the first file.}

\expl{$>$ idm2}{list the second file.}

\expl{$>$ idm 164 170}{display the content of both files for both stars.}

\expl{$>$ idm1 164 170}{display the content of the first file only.}

\expl{$>$ idm -sort hd}{list the entries sorted by HD numbers. Arguments are: 
hd dm hr/bs sao ids ads lss nls}

\expl{$>$ idm 170 236429 113}{identifications can be mixed: star number, 
HD or BS}


\subsection{The identification of double star components}

Systematic cross-referencing has been done between the double star 
components of the IDS (1984.0) catalogue and cluster stars. 
This information may be reached with the command {\tt ids}. 

\expl{$>$ ngc 2422}{}

\expl{$>$ ids}{list the file.}

\expl{$>$ ids 45}{list all components associated with star no 45.}

\expl{$>$ ids ads -eq 6216}{list the lines containing the ADS number 6216.}

\expl{$>$ ids sep -lt 10}{select the systems having a separation 
smaller than 10 arcsec. The options "-gt" and "-i" can also be used.}

\expl{$>$ ids mult -eq AB}{select the entries for AB components.}

\section{Data queries}
Most commands that query data use the same syntax.
Without argument, the command lists the datafile; with one argument,
it returns the star data if the argument is numeric, or executes the
appropriate option. 
To make a selection, enter the command followed by the parameter name
on which the selection is made, the comparator and the value:

{\tt command parameter comparator value}

\noindent The definition of the parameters for each datafile is given in
Appendix E. It may be obtained simply with the command {\tt field} described
above. The discussion of the command {\tt ubv} will illustrate this
syntax, which is the same for all selections.
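The effect of such a selection can be pictured with a toy filter; this is only an illustration, not the actual BDA implementation. Keeping, from a small whitespace-separated table, the lines whose second column (say, V) is below 9.0 mimics the query {\tt ubv v -lt 9.0}:

```shell
# Toy illustration of "parameter comparator value" selection
# (not the BDA code): keep lines whose column 2 (V) is < 9.0
printf '100 8.5 0.1\n200 9.4 0.3\n300 7.9 0.0\n' | awk '$2 < 9.0'
```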

\subsection{General case: the command ubv}

Start by going to the directory of the cluster NGC 129

\expl{$>$ ngc 0129}{}

\expl{$>$ ubv}{all commands entered without argument 
produce a listing of the entire file, one page at a time.}

\noindent Query by star number:

\expl{$>$ ubv 200}{list the UBV data of star 200 (DL Cas)}

\expl{$>$ ref -m}{display the references of these data.}

\expl{$>$ ubv -r 200}{equivalent to the preceding two commands.}

\expl{$>$ ubv 164 170 200}{list the UBV data of the three stars.}

\expl{$>$ ubv -u 1341 200}{select the data for star 200 from reference 1341.}

\expl{$>$ ubv -a 200}{compute the mean value.}

\expl{$>$ ubv hd -id 236429}{query by HD number.}

\expl{$>$ ubv -f no.lst}{search for the UBV data for a list of stars
contained in a file, here {\it no.lst}.}

\noindent Selection on UBV parameters or reference:

\expl{$>$ ubv v -lt 9.0}{select the stars with V $<$ 9.0.}

\expl{$>$ ubv b-v -gt 1.8}{select stars with B-V $>$ 1.8.}

\expl{$>$ ubv no -lt 10}{list the stars with numbers less than 10}

\expl{$>$ ubv ref -eq 1341}{select data from the reference 1341.}

\expl{$>$ ubv ref -ne 1341}{select data from any reference except 1341.}

\noindent Options common to most data commands:

\expl{$>$ ubv -d 10}{give the list of sources that have 10 stars or more 
and the number of stars.}

\expl{$>$ ubv -dr 10}{produce the same output as "ubv -d", but list also 
the corresponding references.}

\expl{$>$ ubv -nb 3}{list the stars with at least three sources.}

\expl{$>$ ubv -sort v}{produce an output sorted by V mag.}

\expl{$>$ ubv -t}{count the number of measurements and stars.}

\expl{$>$ ubv -h}{display the description of the command.}

\noindent Option specific to the {\tt ubv} command:

\expl{$>$ ubv -y}{give the results of the UBV data comparison.}

\expl{$>$ ubv -plt}{plot photometric diagram.}
 
\noindent Most other commands handling photometric data (see Appendix A,
Table A.1) behave similarly. This is especially true for the commands 
that handle UBV-type data: {\tt pgh}, {\tt ccd}, {\tt sit} and {\tt cam}. 
The options -d, -dr, -nb, -t and -h are common to commands handling 
referenced data.

\subsection{The commands ubvm and cmd}
The commands {\tt ubvm} and {\tt cmd} have the special keyword "ns" which
means number of sources, because the data source is replaced by the
number of sources used in the computation of the mean.

\expl{$>$ cmd ns -gt 3}{list the mean values based on 3 sources or more.}

\subsection{The command hrd}
The datafile for the type hrd contains, in addition to the dereddened UBV 
colours, a column with the individual colour excesses. It is possible
to select the stars according to E(B-V).

\expl{$>$ hrd ebv -gt .65}{select the stars with E(B-V) $>$ 0.65.}

\subsection{The commands mk, mks and spt}
The parameter "ts" is specific to the commands {\tt mk}, {\tt mks} and 
{\tt spt}; the options -eq, -lt, -gt and -i have their standard meaning.
The parameters "no" and "ref" can also be used to perform selections.

\expl{$>$ ngc 2516}{}

\expl{$>$ mk ts -eq B8 V}{select the stars with a spectral type like B8 V}

\expl{$>$ mk ts -lt B5}{select the stars earlier than B5}

\expl{$>$ mk ts -gt K0}{select the stars later than K0}

\expl{$>$ mk ts -i B8 A2}{select the stars with types between B8 and A2}

\expl{$>$ mk ref -eq 999}{select the classifications from reference 999}


\subsection{The commands coo and pos}

The commands {\tt coo} and {\tt pos} handle the coordinates. The option 
"-eq" changes the equinox from 1950 to any other one, and has the
meaning of "equinox" only for these two commands. The syntax is the same for
both:

\expl{$>$ ngc 0129}{~}

\expl{$>$ coo -eq 2000 200}{precess the coordinates to 2000}

\expl{$>$ pos -eq 1975 200}{precess the coordinates to 1975}

\noindent It should not be confused with

\expl{$>$ coo ref -eq 128}{select coordinates from source 128 (GSC).}

\noindent It is possible to look for the stars which are in a given radius 
around a center specified by its right ascension and declination.

{\tt coo R -lt 1.5 6 44 30 -20 40 00}\\
\expl{~}{list the stars within 1.5 arcmin around the position 
RA = 6$^{\mbox h}$ 44$^{\mbox m}$ 30$^{\mbox s}$, DEC = -20$^o$ 40' 00"}

\expl{$>$ coo -k 0 27 1}{search for the stars having a coordinate between 
0h 27m 10s and 0h 27m 19s. The same can be done in declination.}

\noindent The option -k searches for strings of characters, even if they 
contain white space.

\expl{$>$ coo -info 10}{list information on reference 10}

\subsection{The commands apm and rpm}
These commands manipulate the absolute and relative proper motions.
These data are not yet very numerous, but their importance will grow
with the results of Hipparcos.
The option -info is interesting: it displays the information collected
by van Leeuwen on proper motion studies. The argument can be a reference
number or a cluster name.

\expl{$>$ apm -info 180}{display the information for reference 180}

\expl{$>$ apm -info NGC 2682}{display the information for NGC 2682}

\subsection{The command irv}
The command {\tt irv} has a few specific options to select and plot
radial velocities as a function of observation time or Julian date.
The option -dn was designed for looking at the observations of each night,
to check visually the stability of the radial-velocity system.
The option -v is used to plot the radial velocities of a star as a function
of the Julian date, and -bin is used to extract the observations of one
star and start the program that determines an orbit. These actions are grouped
in the menu {\tt sbs}. The program is described in Part III.

\expl{$>$ ngc 2516}{}

\expl{$>$ irv -dn}{produce a list of the observing nights with at least 
5 observations, ordered by increasing Julian date. These nights 
are numbered for display and plotting facilities.}

\begin{verbatim}
          Which night number?              5
          Do you want to plot the RVs?     y

          What do you want to do?          m
          Give the mean cluster RV:       15

          What do you want to do?          f

          Which night number?              0
\end{verbatim}

\noindent The level of selection may be modified by entering the new limit:

\expl{$>$ irv -dn 10}{}

\noindent The rest of the process is the same as the one described just above.

\expl{$>$ irv -v 5}{select the observations relating to star 
5 and plot the data versus the Julian dates.}

\begin{verbatim}
          Do you want to plot the RVs?     y

          What do you want to do?          m
          Give the mean cluster RV:       15

          What do you want to do?          f
\end{verbatim}

\expl{$>$ irv 5 $|$  page}{pipe with {\tt page} or {\tt more} for long output}

\expl{$>$ irv jd -eq 40191}{select and display the observations made
on JD 2440191.}

\expl{$>$ irv -bin 25}{select the measurements of star 25 and run the
program to compute the binary orbit, for an SB1}

\expl{$>$ irv -bin2 34}{select the measurements of star 34 and run the
program to compute the binary orbit, for an SB2}

\subsection{The command prob}
The command {\tt prob} allows you to make selections on the membership 
probabilities.

\expl{$>$ prob p -gt .75}{select stars with a membership probability larger than 75\%}

\noindent The options -i and -lt are also valid, as are selections on the star
numbers ("no") and the reference ("ref").

\subsection{The command hpd}
The command name comes from the Hipad Plus$^{TM}$, the name of the Houston 
Instruments digitizing tablet used to measure the rectangular positions on 
cluster photographs. The positions were put in the {\it hipad.xy} file as 
a distinct source, alongside sources bearing authors' names when the data 
were taken from the literature.

The command {\tt hpd} has a specific option -info.
It displays the entry for the given cluster in the file {\it info.hpd}
located in the directory {\sf bda/information/} containing the scale of the 
xy positions and the source of the data.
The cluster name has to be indicated on the command line.

\expl{$>$ hpd -info 2287}{extract the line relating to NGC 2287}

\expl{$>$ hpd -c}{display the number of the central star (0,0)}

\noindent Selection can be made on the x or y or both parameters (xy).

\expl{$>$ hpd xy -lt 20}{select the stars within a box of 20 units in size}

\expl{$>$ hpd r -lt 20}{select the stars within a circle of radius 20}

\noindent A cluster chart may be built from the (x,y) positions and the
magnitudes contained in the file {\it hipad.xy}. The same action may be
obtained with {\tt carte}.

\expl{$>$ hpd -plt}{plot a chart from (x,y) positions}

\subsection{The command gk and red giant data}
The command {\tt gk} displays the file containing the information on 
the cluster red giants. These files are part of an unpublished catalogue 
of about 3200 red giants in open clusters. The description of 
the record content is given in Appendix I.

\expl{$>$ ngc 0129}{move to NGC 129.}

\expl{$>$ gk}{list the file.}

\expl{$>$ gk 164}{display one star.}

\expl{$>$ gk ph -eq ubv}{select the stars with UBV data.}

\expl{$>$ gk ph -ne ddo}{select the stars without DDO data.}

\noindent Arguments for the keyword ph (photometry) are: ubv pgh gen ddo 
cmt ri egg iyz wing.

\noindent A tentative classification (ABCD) has been developed to indicate 
the evolutionary state of the red giants. (See Appendix I for further
comments.) A selection can be performed on the classification with the 
keyword "cl".

\expl{$>$ gk cl -eq C}{list clump red giants}

\expl{$>$ gk cl -eq A}{list AGB giants}

\expl{$>$ gk cl -eq F}{list the field stars}

\expl{$>$ gk cl -eq sb}{list the spectroscopic binaries}


\subsection{Obtaining further information: aud}

The command {\tt aud} (for "autres donn\'ees" = other data) obtains other 
kinds of data for a sample created with a data-handling command, like 
{\tt ubv}, {\tt mk}, {\tt vsn} and so on. Let us first move to the NGC 2516 
directory.

\expl{$>$ ngc 2516}{~}

\expl{$>$ vsn V -gt 270}{form a sample with stars having a rotational 
velocity larger than 270 km/s.}

\expl{$>$ aud mk}{list the spectral types for that sample}

\expl{$>$ aud prob}{list the membership probabilities.}

\expl{$>$ aud -s rem}{list the remarks, but the sample has been redefined 
by the output of the preceding command. This is useful whenever data are not
found for all stars.}

\expl{$>$ rem -k SB}{search for the spectroscopic binaries.}

\expl{$>$ aud vsn}{list the Vsini for the binaries.}

\expl{$>$ mk ts -eq B8 V}{select the B8 V stars.}

\expl{$>$ aud betam}{display the mean H$\beta$ values.}

\noindent The command {\tt aud} is also used to extract data for a 
predefined list of stars, or stars contained in a BDA file.

\expl{$>$ aud -f gK ddo}{display the DDO data for the red giants contained
in the file {\it gK}. Even if the file is normally compressed, do not add 
the .Z suffix.}

\expl{$>$ aud -v gK ddo}{list the red giants which do not have DDO data}
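
\noindent The file-based selections of {\tt aud -f} and {\tt aud -v} amount
to a join (or anti-join) on the star number. A sketch of that logic with
{\tt awk}, on invented files; the real BDA record layouts differ:

```shell
# Invented star list and data file (hypothetical layouts; column 1 is the
# star number in both).
printf '164\n200\n250\n' > gK.tmp
printf '100 1.23\n164 0.98\n200 1.05\n' > ddo.tmp
# "-f" style: display the data lines whose star number appears in the list.
awk 'NR==FNR { want[$1]; next } $1 in want' gK.tmp ddo.tmp
# "-v" style: list the stars of the list that have no data line (here: 250).
awk 'NR==FNR { has[$1]; next } !($1 in has)' ddo.tmp gK.tmp
```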

\expl{$>$ aud -t cmd coo}{display the coordinates for the stars that have
mean UBV data.}

\expl{$>$ aud -v cmd coo}{list the stars that have mean UBV data, but no
coordinates. The argument to -v is a data type.}

\subsection{Obtaining several kinds of data at a time: multi}
One can obtain several kinds of data for one star in a simple way 
by using the command {\tt multi}:

\expl{$>$ ngc 0129}{~}

\expl{$>$ multi 200 ubv mk pos}{list the UBV data, spectral types 
and coordinates for star 200.}

\section{Cluster map and chart}
Cluster maps have been scanned and installed in the database. The scanned
maps (output files in the TIFF format) have been displayed with {\tt xv},
captured with {\tt xwd} and compressed with {\tt gzip}. As a result they
occupy about 10 kB on disk instead of some 300 kB in the TIFF format.
They are uncompressed on the fly and displayed with {\tt xwud}.
The command {\tt ximage} handles the display. It needs the
map author's name as argument.

\expl{$>$ ngc 3572}{~}

\expl{$>$ ximage moffat}{display Moffat's map}

\expl{$>$ ximage steppe}{display Steppe's map}

\noindent The image may only be moved; its size cannot be changed.
A single click with the left mouse button, when the pointer is located
within the map, will make it disappear. It is of course possible to
keep two maps on the screen.

\expl{$>$ xprimage moffat}{make a hardcopy of Moffat's map}

\noindent As discussed under the command {\tt hpd}, the command {\tt carte}
plots a cluster chart from the (x,y) rectangular positions and magnitudes.
By default, {\tt carte} looks for the file {\it hipad.xy}. If it does not
exist, you can enter any other ---.xy filename.

Maps and charts can be displayed simultaneously on the screen.

\section{Cluster Selection}
\subsection{The command {\tt slm}}
It is sometimes useful to have a list of those clusters which 
have many stars observed in one kind of data, or to know which 
clusters lack data. The command {\tt slm} allows this kind of 
query. The {\sf bda} directory contains a sub-directory {\sf contenu}
 which collects files (one per data type) containing the name of the
cluster, the number of stars and the number of measurements available. 
The selection is made only on the number of stars. This command may 
be run from anywhere in the database. It always needs at least one 
argument, i.e. the data type.


\expl{$>$ slm ubv}{list the whole file.}

\expl{$>$ slm ubv -gt 200}{select the clusters having more than 200 
stars observed in UBV.}

\expl{$>$ slm gpo}{list the file containing the information 
on radial velocities from objective prism spectra.}

\expl{$>$ slm map}{list the clusters for which a scanned map is available.}

\expl{$>$ slm ubv -histo}{display the UBV database content in a histogram.}

\expl{$>$ slm ubv 2287}{display the information for NGC 2287.}

\noindent It is also possible to answer the question "Which clusters have
both red giants and DDO data?" with the option -int and the indication of
both datatypes.

\expl{$>$ slm gk -int ddo}{list clusters with red giants and DDO data}

\expl{$>$ slm gk -int ddo ngc}{restrict the output to NGC clusters}

\noindent One can refine the intersection by setting a condition on the
first datatype, by asking, for example, that the number of red giants be
larger than ten.

\expl{$>$ slm -gt 10 gk -int ddo}{list clusters with more than 10 red giants 
and DDO data. No selection is made on the DDO data}

\expl{$>$ slm -gt 10 gk -int ddo ngc}{restrict the output to NGC clusters}

\noindent Finally, constraints can be placed on both selections to avoid
selecting clusters with many data in one datatype and few in the second one.

\expl{$>$ slm -gt 10 gk -int ddo -gt 10}{list clusters with more than 10 red 
giants and DDO data for more than 10 stars.}
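
\noindent The {\tt -int} intersection can be pictured as a join on the
cluster name over two {\sf contenu} files, each with its own threshold on
the star counts. A sketch with {\tt awk} over invented two-column (cluster,
number of stars) files; the real files also carry the number of
measurements:

```shell
# Invented contenu-style files: cluster name, number of stars.
printf 'ngc2287 15\nngc2420 4\nngc0129 12\n' > gk.tmp
printf 'ngc2287 30\nngc0129 5\n' > ddo.tmp
# Clusters with more than 10 red giants AND DDO data for more than 10
# stars, i.e. the effect of "slm -gt 10 gk -int ddo -gt 10".
awk 'NR==FNR { if ($2 > 10) ok[$1]; next } ($1 in ok) && $2 > 10' gk.tmp ddo.tmp
# prints: ngc2287 30
```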


\subsection{The command {\tt ocl}}
BDA also offers the possibility of performing selections on cluster 
parameters such as distance, reddening, age, diameter and equatorial or 
galactic coordinates, on the basis of the results catalogued by 
Lyng{\aa}. The command name is {\tt ocl}. The output is sorted in 
increasing order of the selected parameter.

\expl{$>$ ocl}{list the whole file, page by page.}

\expl{$>$ ocl Berkeley}{list the Berkeley clusters}

\expl{$>$ ocl d -lt 500}{select clusters nearer than 500 pc, 
and display a list sorted by increasing distance.}

\expl{$>$ ocl l -i 0 90}{select clusters with galactic 
longitude between 0$^o$ and 90$^o$.}

\expl{$>$ ocl lb -i 0 30 -5 5}{select clusters according to the 
conditions: 0$^o$ $<$ l $<$ 30$^o$, -5$^o$ $<$ b $<$ +5$^o$}

\expl{$>$ ocl -t d}{list the whole catalogue, sorted by increasing distance}

\noindent The permitted options are -gt, -lt and -i; arguments are taken 
from the following list: 

\expl{alf \ dec \ ad}{right ascension and declination, or both.}

\expl{l \ b \ lb}{galactic coordinates or both,}

\expl{z}{distance from the galactic plane,}

\expl{d}{distance in pc,}

\expl{m-M}{distance modulus,}

\expl{ebv}{colour excess E(B-V),}

\expl{t}{log of the age,}

\expl{D}{cluster apparent diameter.}

\noindent Try also {\tt emploi ocl} for further explanations and examples. 

\section{Statistics}
One can get statistical information on several topics.

For each data file, one can obtain the number of stars observed 
for each reference, using the option -d:

\expl{$>$ ngc 0129}{~}

\expl{$>$ ubv -d}{detailed content}

\expl{$>$ ubv -d 10}{detailed content, sources with more than 10 stars}

\expl{$>$ ubv -dr}{detailed content and references}

\noindent One can also obtain the number of data sources per star, and 
fix some cutoff level if necessary:

\expl{$>$ ubv -nb}{~}
     
\expl{$>$ ubv -nb 3}{select only those stars which have more 
than three UBV sources.}
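
\noindent Counting the number of sources per star, as {\tt ubv -nb} does,
is a one-pass tally on the star number. A sketch on an invented file with
one line per measurement (star number in column 1; the real file format is
richer):

```shell
# Invented measurements: star number, V magnitude (one line per source).
printf '7 6.10\n7 6.12\n7 6.08\n7 6.15\n9 8.02\n' > ubv.tmp
# Stars with more than three UBV sources (the effect of "ubv -nb 3").
awk '{ n[$1]++ } END { for (s in n) if (n[s] > 3) print s, n[s] }' ubv.tmp
# prints: 7 4
```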

\noindent One can also get global information on the database content:

\expl{$>$ stcn ubv}{display the number of stars and UBV measurements.}

\expl{$>$ stcn bda}{display a complete table of the database content.}

\section{Further commands}
\subsection{The command post}
The command {\tt post} has been designed to prepare data from the database
before sending them by mail. {\tt post} takes datatypes as arguments and
produces a file called {\it f.exp}, written in the directory {\sf .T}.

{\tt post lyn tab ubv mk vsn}\\
\expl{~}{prepare a file containing a header and the required data, with the
references when relevant.}

\subsection{The command moyen}
The command {\tt moyen} is used to compute mean values for the UBV data,
because no averaged data are kept in the database. It takes a datatype as
argument.

\expl{$>$ moyen ubv}{compute mean ubv data}

The command first displays the detailed content of the file and asks
whether weights are desired. If yes, it presents each reference and asks
for its weight. It then computes the mean values and displays a table
showing the number of stars with one or more data sources.
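
The weighted mean computed per star is the usual $\sum w_i x_i / \sum w_i$.
A sketch of the arithmetic with {\tt awk}; the line format (star,
reference, V, weight) is invented, and in the real program the weights are
entered interactively per reference:

```shell
# Invented lines: star number, reference, V magnitude, weight.
printf '7 461 6.10 2\n7 902 6.22 1\n' > v.tmp
# Weighted mean per star: sum(w*V) / sum(w).
awk '{ s[$1] += $4*$3; w[$1] += $4 } END { for (k in s) printf "%s %.3f\n", k, s[k]/w[k] }' v.tmp
# prints: 7 6.140
```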

\subsection{The command olps}
The command {\tt olps} (one line per star) is designed to reduce the 
data to one line per star by a selection on the data sources. This
command is used for example to keep one position only (datatype pos)
or to prepare the file {\it mkk.sel}.

\expl{$>$ olps mk}{~}

\noindent The program first presents the detailed content and references
and asks for a list of references to consider. They should be entered in 
order of decreasing importance. It finally asks for the output filename.
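
\noindent The selection performed can be sketched as follows: for each
star, keep the line coming from the most important reference present. With
{\tt awk} on an invented MK-type file (star number, reference, spectral
type), preferring a hypothetical reference 324 over 902:

```shell
# Invented lines: star number, reference, spectral type.
printf '1 324 B8V\n1 902 B9V\n2 902 A0V\n' > mk.tmp
# References in decreasing importance: 324, then 902; keep one line per star.
awk 'BEGIN { pri["324"] = 1; pri["902"] = 2 }
     ($2 in pri) && (!($1 in bp) || pri[$2] < bp[$1]) { best[$1] = $0; bp[$1] = pri[$2] }
     END { for (s in best) print best[s] }' mk.tmp | sort
# prints: 1 324 B8V   and   2 902 A0V
```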


\section{Alternative commands}
Alternative forms of some commands are also available which in principle 
shorten the response time. They require an indication of the type of data 
to be processed. No options are available. They are:

\begin{center}
{\tt meas list detail detref extrf compar slct}
\end{center}

\expl{$>$ meas ubv 1 2}{display the UBV data of star 1 and 2;
(identical to {\tt ubv 1 2}).}

\expl{$>$ list ddo}{list the ddo data file; (identical to {\tt ddo}).} 

\expl{$>$ detail mk}{give the number of stars per reference;
(identical to {\tt mk -d}).}

\expl{$>$ detref vsn}{list in addition the relevant references; (identical to
{\tt vsn -dr}).}

\expl{$>$ extrf ubv 1341}{select the data from reference 1341
(identical to {\tt ubv ref -eq 1341}).}

\expl{$>$ compar ubv pgh}{start the program for comparing the UBV 
pe data with the pg ones; (identical to {\tt ubv -c pgh}).}

\expl{$>$ slct ubv v -lt 7.5}{select stars with V $<$ 7.5; (identical
to {\tt ubv v -lt 7.5}).}


\chapter{The menu mode}
\section{Introduction}
The menu mode is better suited when one wants to perform an action that is 
driven by a menu. Instead of remembering all menu names, it is easier to
pick the desired option from the display. A second menu is often presented, 
and even a third one may appear.

At the end of an action, a triple choice is sometimes presented:
{\tt continue (y/n/f)?}. If the answer is {\tt yes}, the same menu is
proposed again. If {\tt no} is preferred, the previous menu is
presented again, and finally the answer {\tt fin} makes you quit the
whole process. 

In a cascade of menus, as for example the sequence {\tt go} with selections
{\tt dsc} and {\tt doc}, entering {\tt fin} to put an end to the work with
a menu brings the previous menu back. Thus entering {\tt fin} at the
menu "doc" will come back to the calling menu "dsc", and {\tt fin} at "dsc"
will finally bring the user back to the starting point "go".

\section{The menu go}
One possibility is to start with the main menu which offers a 
simple way to interrogate the database.

\noindent To display the main menu, enter:

\expl{$>$ go}{}
\begin{figure}

\caption{Listing of the menu "go"}

\begin{verbatim}
          Welcome to the Open Cluster Data Base 

          What do you want to do today?

          Look at the documentation on BDA  ........ dsc
          Query BDA interactively .................. bda
          Query BDA with a program ................. int
          Query the bibliography ................... bib
          Query the references ..................... ref
          Query Alter's bibliography ............... bdp
          Look at the man pages .................... man
          Look at the french manual ................ mnl
          Analyse the data ......................... ana
          Investigate a cluster .................... atp
          Plot photometric diagrams ................ pho
          Work on co-ordinates ..................... trc
          Update the Data Base ..................... maj
          Come back to the UNIX shell .............. sh 
          Look at the menu system information ...... hlp
          End of interrogation ..................... fin
\end{verbatim}
\end{figure}

A number of preliminary texts describe the database. They can 
be reached by answering "dsc" to the main menu choice.
\expl{$>$ dsc}{displays the general menu}

\begin{verbatim}
          Which subject?                    doc
               Which subject?                 f
               Other information (y/n/f)?     y
               Which subject?                 n
                    Which subject?            1
                    Autre information?        y
                    Which subject?            2
                    Autre information?        n
               Which subject?               end
\end{verbatim}

The command {\tt bdp} may be started from the main menu: the action 
then corresponds to the syntax {\tt bdp} and one only gets a listing of 
the file.

\expl{$>$ go}{}
\begin{verbatim}
          Express your wish:     bdp
          For which cluster:     ngc/0457 
          suite (y/n)?           y
          For which cluster?     ngc/0581
          suite (y/n)?           n
          Express your choice:   fin
\end{verbatim}

\noindent {\tt lyn} can be called from the main menu "go". In this case 
one should give the complete name of the cluster, as for example: ngc/2516 or 
anon/tr16.

The main menu "go" offers a simple interactive interrogation for {\tt ref}.


\section{Other menus}

\chapter{The prompt mode}
\section{Starting and ending}
Enter {\tt dbms} to initiate the prompt mode. The program will display the
prompt bda>, with the current cluster name inside the prompt, and is ready
to work. On-line help is offered each time a new prompt is presented, 
to tell you about the allowed agents or parameters. What you enter at the
bda> prompt is not really a command, although the text is similar, but
what I call an agent. It does not execute the job at once: the agent
becomes part of the prompt, and you then enter further information to
perform the action.

{\tt dbms} offers the option -v (verbose). If it is set, a comment line
is displayed at each new prompt to inform the user about the kind of entry
that is expected. 

The agent prompt is left with {\tt bye}, which brings you back to bda>. If 
the agent is waiting for a parameter or a star number, enter <return> or 0
first. To quit the program, enter <return> or {\tt bye} at the prompt bda>.

When an agent is waiting for a cluster name, the default value is the
current cluster name. It is presented in square brackets. To get the 
information for another cluster, enter another cluster name.

\section{On-line help}
A help facility is provided at every level. If you enter {\tt help} (or 
simply h) at the prompt bda>, it will open a new window and you get the 
prompt help> [dbms], dbms being the default value. It displays information 
about the program, and suggests continuing to read the text on bda, which 
in turn proposes continuing with agent, to discover the allowed agents.

If, for example, you choose {\tt meas}, your prompt is now measure>. Typing 
{\tt help} will display a list of the datatypes for which you can get data. 
After you have selected a datatype, let us say {\tt mk}, the prompt becomes
measure mk>. Entering {\tt help} again will inform you that the program is 
waiting for one or several star numbers. This help facility will guide
you at every prompt during your database query session.

It may be useful to keep the help window that is opened when you enter 
{\tt help} at the prompt bda>, to look at the list of agents and get
information on them. You can also close it, and you will get an icon with 
the label "BDA help". When you say {\tt bye} at the prompt help>, the 
window will close.



\chapter{The graphical user interface}
\section{The panel "Interrogation"}
To use it, move first to a cluster directory and enter {\tt xbda}. On-line 
help has been prepared for most buttons. To read it, place the mouse cursor 
on the chosen button and press $<$help$>$ on the keyboard. Basically, the 
right-hand settings define the type of data you want from the database, and
the command button executes the action. A dialogue line allows you to enter
star numbers when required. You can change cluster by entering a 
cluster number, changing also from NGC to IC or anon (abbreviated menu 
button) and clicking on the button labelled "chdir". "Next" presents
either a menu or moves you to the next panel. "Quit" terminates the
work.


\part{ADVANCED USE and APPLICATIONS}

\chapter{Data Analysis}
\section{The data comparisons}

BDA offers a number of facilities to analyse the data. At present, 
they concern the UBV (pe, pg, CCD), H$\beta$ and Vsini data and the 
individual radial velocities. The results of the analysis already 
obtained on UBV photoelectric data may be displayed with the option 
"-y". The interactive analysis also provides a graphics facility which 
is very useful to check for trends in the parameter differences.

\expl{$>$ ngc 6322}{cluster chosen for the example}

\expl{$>$ ubv -c}{starts a Fortran program}

\expl{$>$ compar ubv}{is an alternative syntax}

\begin{verbatim}
          Enter the first reference:                       461
          Enter the second reference:                      902
          Number of stars to suppress:                       0

          Do you want to look at the resulting file (y/n)?   y
          Do you want to plot the diagrams (y/n)?            y

          Give the y parameter:                              4
          Give the x parameter:                              4

          What do you want to do?                            h
          - suite -                                   "return"
          What do you want to do?                            z
          Give the limits (l r t b):              0 1.6 .5 -.2
          What do you want to do?                            r
          Give your choice:                                  1
          - suite -                                   "return"

          What do you want to do?                            a
          Give the y parameter:                              1
          Give the x parameter:                              1
          What do you want to do?                            f
\end{verbatim}


\noindent You may start this process from the menu for data analysis:

\expl{$>$ ana}{}\\
{\tt Give your choice:}{\tt u} 

\noindent or directly from the main menu:

\expl{$>$ go}{}
\begin{verbatim}
          Express your wish:        ana
               Give your choice:    u
\end{verbatim}


\noindent Interactively, you can also compare any of the ubv, pgh, 
ccd or sit data by using the following syntax:

\expl{$>$ ubv -c pgh}{compare the UBV photoelectric and photographic data}

\expl{$>$ pgh -c ccd}{compare the UBV photographic and CCD data}

\noindent The program asks for the respective references and then behaves 
as described above.
\section{Comparison of the V magnitudes}
\section{Comparison of the colour indices}

\chapter{Photometric Diagrams}
\section{The UBV system}
If SM is already installed, you can try the following commands.
In the command mode, photometric diagrams in the UBV and related systems
are obtained with the command {\tt diag}. By default, the file {\it ubv.cmd}
is used and the diagram is (V, B-V). The parameters may be changed with
the menu options (8 o). To plot UBV diagrams from CCD data, enter
{\tt diag ccd}. The program will ask for the reference; enter 0 if you do 
not know it. Possible types are: ubv, pgh, ccd, sit, cam, cmd, hrd.
cmd is the default type. The file has one line per star.

The main menu also offers some facilities to plot photometric diagrams 
through the option {\tt pho}. This starts a pull down menu which 
offers predefined common diagrams. The same menu can be directly called 
by {\tt diaphot} when you already are in a cluster directory. It works 
correctly, although I did not have time to prepare all the necessary 
scripts. Note that it only works with files containing mean data. These 
have not been systematically computed for the UBV system. The 
present uvby files do not contain the V magnitudes; these have to be 
included in the next version of the uvby mean catalogue. 

The example given below shows you the possibilities and the philosophy of 
the whole thing.

\noindent SM opens a graphics window and the program presents you  
a compact menu which allows you to perform various actions on the 
graphics:

\begin{verbatim}
       Box, Dimension, Title ................. 1
       Modulus, Reddening .................... 2
       Isochrones ............................ 3
       Double Stars .......................... 4
       Membership ............................ 5
       Identifications ....................... 6
       Selections V or x,y ................... 7
       Hardcopy, Other, End .................. 8
\end{verbatim}

\noindent After you choose one of the possibilities, a second menu is 
presented, corresponding to the topic selected. For example, for 1:

\begin{verbatim}
       Change of the limits ................... z
       Change of the box size ................. t
       Definition of the size in cm ........... T
       Change of the expand factor ............ x
       Definition of a title .................. l
       Print of the cluster parameters ........ L
\end{verbatim}

If necessary the program will ask you for the values of the parameters 
you want to change.

\noindent If you do forget the reference number, you can also use the 
following syntax:

\expl{$>$ diag ubv}{without argument}

\begin{verbatim}
          Which reference?          h

                              produces an output equivalent to      
                              "ubv -d" or "detail"

          Which reference?        324

          Which parameter en y?     v

          Which parameter en x?   b-v
\end{verbatim}


\section{The Geneva system}
\section{The uvby system}

\chapter{Calibrations}
\section{The Geneva system}
\section{The uvby$\beta$ system}
\section{The DDO system}
\section{The Washington system}

\chapter{Radial Velocity}
\section{The menu avr}
\section{Spectroscopic binaries}

\chapter{Integrated Colours}

\chapter{Star Membership}
\section{The Expert system}

\part{DEVELOPMENT and MAINTENANCE}

\chapter{The Data}
\section{The menu maj}
Updating the data is rather easy through the command {\tt maj}, which 
presents a pull-down menu. After you enter {\tt maj}, the program
requests the kind of data you are updating. The first menu concerns 
the references. You can add a new reference at the end of the reference 
file or edit it; you are then in "vi".

\noindent The second menu offers four possibilities:

\begin{enumerate}
\item add new data in the file which is then automatically sorted;
\item add a supplementary file called "f.sup";
\item create a new file of the data type;
\item edit the data file.
\end{enumerate}

The third menu handles the files in the {\sf bda/contenu} directory. 
The first item automatically updates these files after modification 
for the NGC and IC clusters only. The anon clusters have to be updated 
manually by using the third item "editer". You can also create a new 
file.

The fourth menu handles the statistics. The first item produces 
counts of the number of lines and of the different stars for the file 
corresponding to the data type. The second item gives the total number 
of clusters, measurements and stars in BDA for this kind of data. The 
third one allows the user to edit and update the BDA summary file.

The last menu offers some facilities to handle the file corresponding 
to the data type. In particular the item "zeroter" (a neologism) 
adds "0" before the star number in the first column. You can also 
compress a file, sort it, display the first 15 lines or edit it.
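
\noindent The effect of "zeroter" can be imitated with a one-line
{\tt sed}; this is only a sketch, and the actual script may do more
checking:

```shell
printf '129 6.1\n130 6.2\n' > f.tmp   # invented two-line data file
sed 's/^/0/' f.tmp                    # prefix "0" to the star number
# prints: 0129 6.1   and   0130 6.2
```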

The keys 2 and 8 move the reverse-video band and 4 and 6 change the 
menu displayed. 1 changes the cluster, and 7 changes the data type. 
0 executes the task.

\chapter{The Coordinates}
\section{The menu trc}

\chapter{Rectangular Positions}
\section{The menu xy}

\chapter{The Cross-References}
\section{The menu ttr}


\part{APPENDIX}

\appendix

\chapter{Command Names}

\section{Shell commands}

\setlongtables

\begin{longtable}{ll}
\caption{Command Names} \\
\hline
 Commands     &      Objects   \\
\hline
\endfirsthead
\hline
 Commands      &      Objects   \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
 & \\
\multicolumn{2}{l}{{\bf Commands on photometric data}} \\
 & \\
    ubv     &    UBV photoelectric data \\
    pgh     &    UBV photographic data \\
    ccd     &    UBV CCD data \\
    sit     &    UBV data from video cameras (SIT) \\
    cmd     &    UBV data from the file {\it ubv.cmd} \\
    hrd     &    UBV data with individual dereddening \\
    rgu     &    RGU photographic photometry \\
    uvby    &    uvby measurements \\
    uvbym   &    uvby mean values \\
    egg     &    uvby data from Eggen \\
    ccdy    &    uvby CCD data \\
    beta    &    H$\beta$ measurements \\
    betam   &    H$\beta$ mean values \\
    gen     &    Catalogue of magnitudes in the Geneva system \\
    mpg     &    Photographic and photovisual data \\
    prm     &    Geneva parameters \\
    ric     &    RI (Cousins) data \\
    rie     &    RI (Eggen) data \\
    rij     &    RI (Johnson) data \\
    rik     &    RI (Kron) data \\
    jhk     &    JHK measurements \\
    ddo     &    DDO measurements \\
    wal     &    Walraven measurements \\
    cmt     &    Washington measurements \\
    vil     &    Vilnius measurements \\
    smi     &    Measurements in the system of Lindsey Smith \\
  & \\
\multicolumn{2}{l}{Commands on spectroscopic data} \\
 & \\
    mk      &     MK spectral types \\
    mks     &     Selected MK types \\
    spt     &     One dimensional spectral types (HD format) \\
    vsn     &     Projected rotational velocities Vsini \\
    irv     &     Individual radial velocities \\
    mrv     &     Mean radial velocities \\
    gpo     &     Objective prism radial velocities \\
    rfs     &     Radial velocities from Geyer and Nelles \\
    orb     &     Orbital elements of spectroscopic binaries \\
 & \\
\multicolumn{2}{l}{Commands on identifications, positions and proper motions} \\
 & \\
    idm      &     Identifications BS, HD, DM, GCVS, ... \\
    idm1     &     Identifications BS, HD, DM, NLS, LSS, GCVS \\
    idm2     &     Identifications IDS, ADS, SAO, misc. \\
    ids      &     Identifications of components in the IDS catalogue \\
    tab      &     Cross-reference tables \\
    coo      &     Full coordinates \\
    pos      &     Rounded off coordinates \\
    hpd      &     Positions (x,y) from the file {\it hipad.xy} \\
    apm      &     Absolute proper motions \\
    rpm      &     Relative proper motions \\
 & \\
\multicolumn{2}{l}{Miscellaneous commands} \\
 & \\
    aud    &      Provide other types of data for a sample \\
    prob   &      Membership probabilities \\
    rem    &      Remarks \\
    ref    &      References \\
    bib    &      Bibliography (A.A.A) \\
    bdp    &      Bibliography of Alter et al. and Ruprecht et al. \\
    lyn    &      Lyng{\aa}'s catalogue \\
    gk     &      Red giants in cluster fields \\
    multi  &      Multiple types of data \\
    sysno  &      Give the adopted reference for each cluster \\
 & \\
\multicolumn{2}{l}{Selection of clusters} \\
 & \\
    slm    &     Display the number of available data per cluster \\
           &     for a given type of data \\

    ocl    &     Select clusters according to their parameters:  \\    
           &     distance, reddening, age and position. \\

    stcn    &    Display the total number of measurements and  \\
            &    stars for a given type of data  \\
 & \\
\multicolumn{2}{l}{Menus} \\
 & \\
    go     &   General menu for interrogating BDA \\
 & \\
    dsc    &      Choice of the descriptions of BDA \\
    doc    &      Documentation on BDA \\
    apl    &      Description of coded applications \\
    dia    &      Description of graphical capabilities \\
    ans    &      Description of open cluster analysis \\
    anl    &      Description of data analysis \\
    msj    &      Description of up-dating processes \\
    ncm    &      Names and objects of commands \\
 & \\
    atp    &      Astrophysical study of clusters \\
    ana    &      Analysis of the data \\
    avr    &      Handling of radial velocities \\
    cbr    &      Calibrations \\
    sbs    &      Spectroscopic binaries \\
    trc    &      Handling the coordinates \\
    ttr    &      Cross-references \\
    xy     &      Working with the x,y positions \\
    maj    &      Update of the database \\
    diaphot &     Plot photometric diagram \\
    testenv &     Test the setting of BDA environment variables \\
 & \\
\multicolumn{2}{l}{Commands on compressed files} \\
 & \\
    scl    &      Display a scale and 10 lines of a file \\
    zex    &      Cat a compressed file \\
    zp     &      List a compressed file page after page \\
    zhd    &      List the first 15 lines of a compressed file \\
    ztl    &      List the last 15 lines of a compressed file \\
    zpr    &      Print a compressed file with {\tt lpr} \\
    vz     &      Edit a compressed file with {\tt vi} \\
    wz     &      Count the number of lines of a compressed file \\
 & \\
\multicolumn{2}{l}{Help commands} \\
 & \\
  emploi   &      Describe the capabilities of commands \\
  man      &      Display the command manual pages \\
  mnl      &      Display the command manual pages (French) \\
  doc      &      Display the menu of BDA descriptions \\
  xhelp    &      Hypertext command description \\
 & \\
  dtype     &     Display the data type summary \\
  ncm       &     Access to descriptions of the command names \\
  option    &     Display the option description \\
  nfile     &     Display the filename related to a datatype \\
  fichier   &     Display the file content description  \\
  field     &     Display the field names of a datatype \\
  form      &     Display the format of the datatype specified \\
 & \\
\end{longtable}

\newpage
\setlongtables

\begin{longtable}{ll}
\caption{Further commands} \\
\hline
 Name & Action \\
\hline
\endfirsthead
\hline
 Name & Action \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
 carte  &  Plot a cluster chart from (x,y) data \\
 cmpv   &  Compare V magnitudes \\
 cmpbv  &  Compare colour indices \\
 compar &  Compare UBV photometric data \\
 dbms   &  Prompt-mode query program \\
 diag   &  Plot photometric diagrams for UBV data \\
 diagen &  Plot photometric diagrams in the Geneva system \\
 detail &  Display the detailed file content \\
 detref &  Display the content and references \\
 ecasy  &  Display the difference in the UBV data \\
 extrf  &  Select data from the source reference \\
 list   &  List a file \\
 meas   &  Extract data for a list of stars \\
 moyen  &  Compute mean values for UBV data \\
 nbmeas &  Number of entries for a star \\
 nbsrc  &  Number of data sources per star \\
 olps   &  Keep one line per star \\
 post   &  Prepare data for sending by E-mail \\
 slct   &  Make a selection \\
 total  &  Total number of measurements and stars \\
 xhelp  &  Hypertext help with NCSA Mosaic \\
 ximage &  Display the scanned maps \\
 xprimage & Print the scanned maps \\
\end{longtable}

\newpage
\section{Program names}

\setlongtables

\begin{longtable}{ll}
\caption{Program names} \\
\hline
 Name & Action \\
\hline
\endfirsthead
\hline
 Name & Action \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
\multicolumn{2}{l}{Fortran programs} \\
  &  \\
 ana\_ddo   &  calibration in the DDO system \\
 astrom     &  transformation from (x,y) in " of arc to RA Dec \\
 beta\_eca  &  comparison of H$\beta$ indices \\
 bibg       &  bibliographic search \\
 cbr\_uvby  &  calibration for uvby \\
 ccd\_eca   &  comparison of CCD data, a link to ubv\_eca \\
 chgtno     &  change of numbering system \\
 cintf      &  compute integrated colours and magnitudes \\
 cogxy      &  transformation from (x,y) in arbitrary units to RA Dec \\
 cordn1     &  transformation from the type coo to pos \\
 corresp    &  search for cross-identifications \\
 decon\_bv  &  perform a photometric deconvolution of a binary \\
 degrah     &  transformation from RA in degrees to RA in hours \\
 dered      &  code to deredden UBV colours \\
 efem       &  ephemeris for spectroscopic binaries \\
 gould      &  transformation from (x,y) in minutes of time and degrees to RA Dec \\ 
 hyad       &  compute the distance of Hyades stars from proper motions \\
 lucke      &  determine an orbit for a spectroscopic binary \\
 magV\_eca  &  comparison of V magnitudes \\
 moy\_ubv   &  compute mean values for UBV data \\
 moy\_vr    &  compute mean radial velocities \\
 prepost    &  compute precession (B system) \\
 prepxy     &  prepare the (x,y) file for plotting cluster chart \\
 retab      &  use of cross-reference tables \\
 rgl        &  compute regressions \\
 rgu\_ubv   &  transform RGU into UBV \\
 sref       &  interactive query of the references \\
 sysxy      &  transformation of a (x,y) system into another one \\
 ubv\_eca   &  comparison of two sources of UBV data \\
 ubv\_moy   &  compute mean values for UBV data \\
 wal-john   &  compute Johnson's V and B-V from Walraven V and V-B \\
 xystd      &  compute (x,y) from RA and Dec \\
 & \\
\multicolumn{2}{l}{C programs} \\
 & \\
 ajzero     &  format coordinates \\
 b2v1\_bvj  &  transform B2-V1 into B-V Johnson \\
 cinte      &  compute integrated colours \\
 diaphot    &  menu (with curses) for photometric diagrams \\
 fparam     &  compute Geneva parameters \\
 maj        &  menu (with curses) for updating the database \\
 notab      &  preparation of a new cross-reference table \\
 param      &  compute Geneva parameters \\
 sysexp     &  expert system for membership determination \\
 test\_env  &  program to test the settings of environment variables \\
 & \\
\multicolumn{2}{l}{C programs using SM libraries} \\
 & \\
 diag       &  plot diagrams in the UBV system \\
 diagen     &  plot diagrams in the Geneva system \\
 Histo      &  make histograms \\
 plt\_eca.ubv &  plot the results of UBV data comparisons \\
 splt\_irv  &  plot the spectroscopic binary orbits \\
 & \\
\multicolumn{2}{l}{User interface} \\
 & \\
 xbda     & user interface \\
\end{longtable}


\chapter{Option Names}

\setlongtables

\begin{longtable}{lll}
\caption{Option Description} \\
\hline
 Option & Meaning & Action \\
\hline
\endfirsthead
\hline
 Option & Meaning & Action \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot

\multicolumn{3}{l}{Data query} \\
 & \\
-m  stars      &  multiple   & : several star numbers at the same time \\
-r  star       &  reference  & : give the reference as well as the data\\
-q  id         &  quelconque (any) & : query with any identification\\
-a  star       &  average    & : compute also the average value (ubv) \\
-u  ref star   &  uniq       & : select "star" in the reference "ref"\\
 &  & \\
\multicolumn{3}{l}{Forming samples} \\
 &  & \\
-gt   lim     &  greater than  & : stars with parameter $>$ lim \\
-lt   lim     &  less than     & : stars with parameter $<$ lim\\
-i  lim1 lim2 &  interval      & : stars with lim1 $<$ parameter $<$ lim2\\
-eq  ref      &  equal to      & : data from the reference "ref"\\
-ne  ref      &  not equal      & : data not belonging to "ref"\\
  &  & \\
-k  string    &                & : stars corresponding to "string"\\
 &  & \\
\multicolumn{3}{l}{Information on files} \\
 &  & \\ 
-d  [lim]  &    detail  &  : count the number of stars per reference\\
-dr [lim]  &    detail/ref  & : same as above, but adds the list of ref.\\
-nb [lim]  &    number      &  : number of measures per star [$\geq$ lim]\\
-t         &    total       &  : count the number of measures and stars\\
 & & \\
\multicolumn{3}{l}{Help} \\
 & & \\
-h            &      help    &   : display the usage description \\
 & & \\
\multicolumn{3}{l}{Applications} \\
 & & \\
-c    [type]  &  comparaison (comparison) & : run the comparison program\\
-dn   [lim]   &  detail night & : give the number of stars per night\\
-v    star    &  voir (see)   & : select the measures and plot them\\
-bin  star    &  binary       & : start the code for orbit computing\\
-bin2 star    &  binary       & : start the code for orbit computing\\
-sb1  star    &  SB1          & : plot the orbit of an SB1 binary \\
-sb2  star    &  SB2          & : plot the orbit of an SB2 binary \\
-sort param   &  sort         & : sort the output according to "param" \\
 & & \\
\multicolumn{3}{l}{Results} \\
 & & \\
-y            &             & : display the results of the analysis\\
-histo        &  histo      & : plot a histogram of BDA content \\
-info         &  information & : display technical information \\
 & & \\
\multicolumn{3}{l}{Use of the files} \\
 & & \\
-f filename   & file        &  : use the file given\\
-s            &  sortie (output) & : use the file sortie.out\\
-l            &  liste (list)    & : use the file liste.noet\\
\end{longtable}

\chapter{Data types}

\setlongtables

\begin{longtable}{ll}
\caption{Data types} \\
\hline
 Type & Data \\
\hline
\endfirsthead
\hline
 Type & Data \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
 apm     & :  absolute proper motion \\
 beta    & :  measurements in the H$\beta$ system \\
 betam   & :  mean H$\beta$ data \\
 ccd     & :  CCD measurements in the UBV system \\
 ccdric  & :  CCD measurements in Cousins's RI system \\
 ccdy    & :  CCD measurements in the uvby system \\
 cmd     & :  mean UBV values for plotting the HR diagram \\
 cmt     & :  measurements in the Washington system \\
 coo     & :  coordinates (1950) from astrometric sources \\
 ddo     & :  measurements in the DDO system \\
 egg     & :  uvby measurements in Eggen's modified system \\
 gen     & :  mean values from the Geneva catalogue \\
 gk      & :  list of red giants in the cluster field \\
 gpo     & :  objective prism radial velocities (GPO) \\
 hpd     & :  rectangular x,y positions \\
 idm1    & :  identifications (IDAM), first part \\
 idm2    & :  identifications (IDAM), second part \\
 ids     & :  identifications of double star components (IDS) \\
 irv     & :  individual radial velocities \\
 jhk     & :  measurements in the JHK photometric system \\
 mk      & :  MK spectral types \\
 mks     & :  selected MK types \\
 mpg     & :  measurements in the m$_{\mbox{pv}}$, m$_{\mbox{pg}}$ system \\
 mrv     & :  mean radial velocities from the literature \\
 orb     & :  orbital elements of spectroscopic binaries \\
 pgh     & :  photographic UBV measurements \\
 pghm    & :  mean photographic UBV values \\
 pos     & :  rounded-off coordinates (1950) \\
 prm     & :  Geneva parameters \\
 prob    & :  membership probabilities (proper motion) \\
 probr   & :  membership probabilities (radial velocity) \\
 rem     & :  remarks \\
 rgu     & :  measurements in the RGU photographic system \\
 ric     & :  measurements in Cousins's RI system \\
 rie     & :  measurements by Eggen in Kron's RI system \\
 rij     & :  measurements in Johnson's RI system \\
 rik     & :  measurements in Kron's RI system \\
 rpm     & :  relative proper motion \\
 sit     & :  UBV measurements from video (SIT) cameras \\
 spt     & :  unidimensional spectral types (HD format) \\
 tref    & :  references for cross-reference tables \\
 tab     & :  cross-reference tables \\
 ubv     & :  photoelectric UBV measurements \\
 ubvm    & :  mean photoelectric UBV values \\
 uvby    & :  uvby measurements \\
 uvbym   & :  mean uvby values \\
 vic     & :  VI measurements in Cousins's system \\
 vie     & :  VI measurements in Kron's system \\
 vil     & :  measurements in the Vilnius system \\
 vsn     & :  projected rotational velocities \\
 wal     & :  measurements in the Walraven system \\
\end{longtable}

\chapter{File Names}

\setlongtables

\begin{longtable}{ll}
\caption{Data file names and content} \\
\hline
 Name & Content \\
\hline
\endfirsthead
\hline
 Name & Content \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
 adel.and   & :  coordinates (1950) from Andersen and Reiz \\
 adel.coo   & :  coordinates (1950) from astrometric sources \\
 adel.pos   & :  rounded-off coordinates (1950) \\
 baas.dat   & :  information on ongoing work \\
 beta.mes   & :  measurements in the H$\beta$ system \\
 beta.moy   & :  mean H$\beta$ data \\
 bdp.cat    & :  Budapest bibliography \\
 cmt.mes    & :  measurements in the Washington system \\
 ddo.mes    & :  measurements in the DDO system \\
 elem.orb   & :  orbital elements of spectroscopic binaries \\
 gen.cat    & :  mean values from the Geneva catalogue \\
 gen.prm    & :  Geneva parameters \\
 gK         & :  list of red giants in the cluster field \\
 hipad.xy   & :  rectangular x,y positions \\
 idam.id1   & :  identifications (IDAM), first part \\
 idam.id2   & :  identifications (IDAM), second part \\
 idam.ids   & :  identifications of double star components (IDS) \\
 jhk.mes    & :  measurements in the JHK photometric system \\
 lyn.dat    & :  information from  Lyng{\aa}'s 5th catalogue \\
 mem.fin    & :  membership estimates from proper motion, radial velocity and photometry \\
 mkk.don    & :  MK spectral types \\
 mkk.sel    & :  selected MK types \\
 mpg.mes    & :  measurements in the m$_{\mbox{pv}}$, m$_{\mbox{pg}}$ system \\
 prob.mu    & :  membership probabilities (proper motion) \\
 prob.vr    & :  membership probabilities (radial velocity) \\
 rem.txt    & :  remarks \\
 pm.abs     & :  absolute proper motions \\
 pm.rel     & :  relative proper motions \\
 ric.ccd    & :  CCD measurements in Cousins's RI system \\
 ric.mes    & :  measurements in Cousins's RI system \\
 rie.mes    & :  measurements by Eggen in Kron's RI system \\
 rij.mes    & :  measurements in Johnson's RI system \\
 rik.mes    & :  measurements in Kron's RI system \\
 rgu.mes    & :  measurements in the RGU photographic system \\
 spt.don    & :  unidimensional spectral types (HD format) \\
 trans.ref  & :  references for cross-reference tables \\
 trans.tab  & :  cross-reference tables \\
 ubv.ccd    & :  CCD measurements in the UBV system \\
 ubv.cmd    & :  mean UBV values for plotting the HR diagram \\
 ubv.eca    & :  differences between two sources of UBV data \\
 ubv.sit    & :  UBV measurements from video (SIT) cameras \\
 ubv.peg    & :  merge of the corrected pe and pg UBV data \\
 ubv.peo    & :  photoelectric UBV measurements \\
 ubv.pem    & :  mean photoelectric UBV values \\
 ubv.pgo    & :  photographic UBV measurements \\
 ubv.pgm    & :  mean photographic UBV values \\
 uvby.ccd   & :  CCD measurements in the uvby system \\
 uvby.egg   & :  uvby measurements in Eggen's modified system \\
 uvby.mes   & :  uvby measurements \\
 uvby.moy   & :  mean uvby values \\
 vic.mes    & :  VI measurements in Cousins's system \\
 vie.mes    & :  VI measurements in Kron's system \\
 vil.mes    & :  measurements in the Vilnius system \\
 vrad.gpo   & :  objective prism radial velocities (GPO) \\
 vrad.irv   & :  individual radial velocities \\
 vrad.mrv   & :  mean radial velocities from the literature \\
 vsini.don  & :  projected rotational velocities \\
 vsini.moy  & :  mean or selected rotational velocities \\
 wal.mes    & :  measurements in the Walraven system \\
\end{longtable}

\chapter{Data file fields}

\setlongtables

\begin{longtable}{lll}
\caption{Data file fields} \\
\hline
 Datatype & Filename & Fields \\
\hline
\endfirsthead
\hline
 Datatype & Filename & Fields \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
apm    & pm.abs   & no ref mux muy ex ey \\
beta   & beta.mes & no ref hb \\
betam  & beta.moy & no ns hb \\
ccd    & ubv.ccd  & no ref v b-v u-b \\
ccdric & ric.ccd  & no ref v v-r r-i \\
ccdy   & uvby.ccd & no ref v b-y m1 c1 \\
cmd    & ubv.cmd  & no ns v b-v u-b \\
cmt    & cmt.mes  & no ref t1 c-m t1-t2 m-t2 \\
coo    & adel.coo & no ref ah am as dd dm ds \\
ddo    & ddo.mes  & no ref c4548 c4245 c4142 \\
egg    & uvby.egg & no ref v b-y M1 C1 nm \\
gen    & gen.cat  & no q sv v p sc u v b1 b2 v1 g \\
gk     & gK       & no p M V P vr ph cl v b-v ts \\
hpd    & hipad.xy & no v x y \\
hrd    & ubv.hrd  & no ref v b-v u-b ebv \\
irv    & vrad.irv & no ref jd cp rv nl me disp tech \\
idm1   & idam.id1 & no bs hd dm nls lss gcvs \\
idm2   & idam.id2 & no ids ads sao misc \\
ids    & idam.ids & no ids ads mult theta sep ma mb \\
jhk    & jhk.mes  & no ref k j-k h-k \\
mk     & mkk.don  & no ref ts cl \\
mks    & mkk.sel  & no ref ts cl \\
mrv    & vrad.mrv & no ref rv sig \\
ocl    & ocl.dat  & cat no ah am dd dm l b d m-M ebv t ts z D \\
orb    & elem.orb & no ref p t0 e omg v0 k1 k2 omc \\
pgh    & ubv.pgo  & no ref v b-v u-b \\
pghm   & ubv.pgm  & no ns v b-v u-b \\
pos    & adel.pos & no ref ah am as dd dm \\
prob   & prob.mu  & no ref p \\
prm    & gen.prm  & no v u-b b-v u-b1 b2-v1 d delta m2 g \\
rem    & rem.txt  & no text \\
rgu    & rgu.mes  & no ref g u-g g-r \\
ric    & ric.mes  & no ref v v-r r-i nm \\
rie    & rie.mes  & no ref v v-r r-i nm \\
rij    & rij.mes  & no ref v v-r r-i nm \\
rik    & rik.mes  & no ref v v-r r-i nm \\
rpm    & pm.rel   & no ref mux muy ex ey \\
smi    & smi.mes  & no ref v b-v u-b nm \\
spt    & spt.don  & no ref ts \\
ubv    & ubv.peo  & no ref v b-v u-b nm \\
ubvm   & ubv.pem  & no ns v b-v u-b \\
uvby   & uvby.mes & no ref v b-y m1 c1 nm \\
uvbym  & uvby.moy & no ns v b-y m1 c1 \\
vic    & vic.mes  & no ref v v-i \\
vik    & vik.mes  & no ref v v-i \\
vil    & vil.mes  & no ref v u-p p-x x-y y-z z-v v-s \\
vsn    & vsini.don & no ref vsn \\
wal    & wal.mes  & no ref vj v v-b b-u u-w b-l \\
\end{longtable}

\chapter{SM macro names}


\setlongtables

\begin{longtable}{ll}
\caption{SM macro names} \\
\hline
 Name & Object \\
\hline
\endfirsthead
\hline
 Name & Object \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
 carte 2    &  plot a (x,y) map \\
 chk 1      &  check the cross-identifications \\
 diag       &  plot for a user defined file \\
 hsg        &  plot a histogram from a user defined file \\
 plt\_Dddo 2 &  plot the DDO C(45-48) vs C(42-45) diagram \\
 plt\_eca.BV  3  &  plot the colour-index comparisons \\
 plt\_eca.magV  1 & plot the differences on V magnitudes \\
 plt\_nrv 3  &  plot the Vrad vs JD diagram \\
 plt\_uvby 2 &  plot the $\beta$ vs c$_0$ diagram \\
 plt\_vrv 3  &  plot the Vrad vs time diagram for one JD\\
 sundim 4   &  define the plot dimension in [cm] \\
 ttr        &  check the cross-identifications \\
 vvv 1      &  comparison of V magnitudes \\
\end{longtable}

\chapter{Cluster names}

\setlongtables 

\begin{longtable}{llll}
\caption{Cluster names and acronyms} \\
\hline 
 Name & Acronym & Name & Acronym \\
\hline   
\endfirsthead 
\hline   
 Name & Acronym & Name & Acronym \\
\hline 
\endhead 
\hline 
\endfoot 
\hline 
\endlastfoot 
Basel          &  bas  & Markarian      &  ma \\
Bergh-Hagen    &  vdbh & Melotte        &  mel \\
Berkeley       &  be   & Pismis         &  pis \\
Biurakan       &  biu  & Roslund        &  ros \\
Blanco         &  bl   & Ruprecht       &  rup \\
Bochum         &  bo   & Sher           &  sh \\
Collinder      &  cr   & Stephenson     &  ste \\
Czernik        &  cz   & Stock          &  st \\
Dolidze        &  do   & Tombaugh       &  to \\
Feinstein      &  fei  & Trumpler       &  tr \\
Haffner        &  haf  & Turner         &  tu \\
Harvard        &  ha   & Upgren         &  up \\
Havlen-Moffat  &  hm   & van den Bergh  &  vdb \\
Hogg           &  ho   & Waterloo       &  wat \\
King           &  ki   & Westerlund     &  wes \\
Lynga          &  Ly   & & \\
\end{longtable}

\chapter{Alias}

\setlongtables

\begin{longtable}{ll}
\caption{Useful aliases} \\
\hline
 Alias & Action \\
\hline
\endfirsthead
\hline
 Alias & Action \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
alias hyades    &  cd \$BDA/anon/mel025 \\
alias pleiades  &  cd \$BDA/anon/mel022 \\
alias coma      &  cd \$BDA/anon/mel111 \\
alias alpha     &  cd \$BDA/anon/mel020 \\
alias orion     &  cd \$BDA/ngc/1976 \\
alias praesepe  &  cd \$BDA/ngc/2632 \\
alias m44       &  cd \$BDA/ngc/2632 \\
alias m67       &  cd \$BDA/ngc/2682 \\
alias m11       &  cd \$BDA/ngc/6705 \\
\end{longtable}
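The aliases above are plain shortcuts to cluster directories, relying on the
{\tt BDA} environment variable pointing at the database root. As a sketch
only, the same shortcuts could be written as shell functions, which, unlike
aliases, also work in non-interactive shells; the actual BDA set-up
presumably uses the {\tt alias} form shown in the table:

```shell
# Hypothetical function equivalents of the alias table; $BDA is assumed
# to point at the root of the database tree.  Functions are used here
# because plain aliases are inactive in non-interactive shells.
hyades()   { cd "$BDA/anon/mel025"; }
pleiades() { cd "$BDA/anon/mel022"; }
coma()     { cd "$BDA/anon/mel111"; }
praesepe() { cd "$BDA/ngc/2632"; }
m67()      { cd "$BDA/ngc/2682"; }
```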

\chapter{Catalogues}

\section{The Catalogue of Red Giants in open clusters}
The catalogue of red giants in the field of open clusters collects the
red star candidates for cluster membership identified in photometric studies.
It results from the work done to prepare the systematic observations
undertaken with the Coravel scanners. Each file, called {\it gK}, is
located in the cluster directory and summarizes the available data
for each star. The information has been coded and the keys to the codes
are described below. Further information is provided by the hypertext
description ({\tt xhelp gk}).

Each record of a {\it gK} file gives:  

\begin{itemize}
\item the star identification,
\item the membership probability from proper motion,
\item the estimated membership from proper motion (M), 
      radial velocity (V) and photometry (P),
\item the codes for the photometric systems in which the star 
      has been observed:

\begin{enumerate}
\item  UBV photoelectric
\item  UBV photographic
\item  Geneva system
\item  DDO system
\item  Washington system
\item  RI
\item  Eggen's uvby
\item  UBViyz
\item  Wing's system
\end{enumerate}

\item a classification code for the evolutionary state:

\begin{itemize}
\item A:  asymptotic
\item B:  branch (ascending)
\item C:  clump
\item S:  supergiant
\item F:  field (non-member)
\item D:  composite (double)
\item ~B: binary (SB)
\end{itemize}

\item the V magnitude (pe or pg, according to the UBV code)
\item the B-V index
\item the spectral type
\end{itemize}
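
Since each {\it gK} record is a sequence of fields (the data file field
table lists, for type {\sf gk}, the order no p M V P vr ph cl v b-v ts),
standard Unix tools can filter it. A hedged sketch, assuming
whitespace-separated columns in that order; real {\it gK} files may use
fixed-width formatting instead, and the sample records below are invented
for illustration:

```shell
# Hypothetical awk filter selecting clump candidates (class code C) from
# a gK-style file.  The whitespace-separated column order
# "no p M V P vr ph cl v b-v ts" is an assumption; the data are made up.
cat > gK.sample <<'EOF'
141 98 M V P +12.3 134 C 10.48 1.12 K0III
17   5 F - -  -4.1   1 F 11.20 0.55 F5
EOF
awk '$8 == "C" { print $1, $9, $10 }' gK.sample
```

With the sample file above, this prints the number, V and B-V of the
single clump candidate: {\tt 141 10.48 1.12}.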
\end{document}
