Q1: Discuss in
detail Table Management Techniques.
Ans: An
Assembler uses the following tables:
OPTAB (Operation Code Table): contains each mnemonic operation code and its machine-language equivalent.
SYMTAB (Symbol Table): maintains each symbolic label and operand together with its corresponding machine address.
LITTAB (Literal Table): a table of the literals used in the program.
For efficiency reasons SYMTAB must remain in main memory throughout passes I and II of the assembler. LITTAB is not accessed as frequently as SYMTAB; however, it may be accessed frequently enough to justify keeping it in memory. If memory is at a premium, only a part of LITTAB can be kept in memory. OPTAB should be in memory during pass I.
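The two tables consulted most often are naturally implemented as hash tables. A minimal Python sketch (the mnemonics and opcodes here are illustrative, not from any particular machine):

```python
# OPTAB: mnemonic operation code -> machine-language equivalent
OPTAB = {
    "LDA": 0x00,
    "STA": 0x0C,
    "ADD": 0x18,
}

# SYMTAB: symbolic label -> machine address, filled during pass I
SYMTAB = {}

def define_symbol(label, address):
    """Record a label's address; duplicate labels are an assembly error."""
    if label in SYMTAB:
        raise ValueError(f"duplicate symbol {label!r}")
    SYMTAB[label] = address

# Pass I enters labels; pass II looks up both tables to emit code.
define_symbol("LOOP", 0x1003)
opcode = OPTAB["ADD"]      # machine opcode for the mnemonic
addr = SYMTAB["LOOP"]      # address recorded for the label
```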
Q2: What is parsing? Write down the drawbacks of top-down parsing with backtracking.
Ans: Parsing is the process of analyzing a text, made of a sequence of tokens, to determine its grammatical structure with respect to a given formal grammar. Parsing is also known as syntactic analysis, and the component that performs it is called a parser. The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar; the input is valid with respect to the grammar if such a derivation exists.
Following are the drawbacks of top-down parsing with backtracking:
(i) Semantic actions cannot be performed while making a prediction. The actions must be delayed until the prediction is known to be part of a successful parse.
(ii) Precise error reporting is not possible. A mismatch merely triggers backtracking; a source string is known to be erroneous only after all predictions have failed.
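Both drawbacks can be seen in a minimal sketch of a backtracking top-down parser for the illustrative toy grammar S -> a S b | a b (the grammar is an assumption chosen for brevity): a failed alternative silently backtracks, so the parser cannot say where the real error is.

```python
def parse_S(s, i):
    """Try each alternative of S -> a S b | a b at position i.
    Returns the index just past the match, or None on failure.
    A mismatch merely triggers backtracking to the next alternative,
    so no precise error position can be reported."""
    # alternative 1: S -> a S b
    if i < len(s) and s[i] == "a":
        j = parse_S(s, i + 1)
        if j is not None and j < len(s) and s[j] == "b":
            return j + 1
    # backtrack, then alternative 2: S -> a b
    if s[i:i + 2] == "ab":
        return i + 2
    return None

def accepts(s):
    """The string is valid iff S derives the entire input."""
    return parse_S(s, 0) == len(s)
```

For example, `accepts("aabb")` succeeds, while for `accepts("aab")` every prediction fails and the only diagnosis available is "not in the language".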
Q3: Write down
different system calls for performing different kinds of tasks.
Ans: A system call is a request made by a program to the operating system to perform a task -- picked from a predefined set -- for which the program does not have the required permissions in its own flow of execution. System calls provide the interface between a process and the operating system. Most operations interacting with the system require permissions not available to a user-level process; for example, I/O performed with a device present on the system, or any form of communication with other processes, requires the use of system calls.
The main types of system calls are as follows:
• Process Control: These system calls are used to control processes. Examples: end, abort, load, execute, create process, terminate process.
• File Management: These system calls are used to manage files. Examples: create file, delete file, open, close, read, write.
• Device Management: These system calls are used to manage devices. Examples: request device, release device, read, write, get device attributes.
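The file-management calls can be exercised through Python's `os` module, whose functions are thin wrappers over the underlying system calls (a POSIX system is assumed):

```python
import os
import tempfile

# os.open/os.write/os.read/os.close/os.unlink wrap the open(2),
# write(2), read(2), close(2) and unlink(2) system calls.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # create and open a file
os.write(fd, b"hello")                        # write to it
os.close(fd)                                  # close it

fd = os.open(path, os.O_RDONLY)               # reopen for reading
data = os.read(fd, 5)                         # read back the bytes
os.close(fd)
os.unlink(path)                               # delete the file
```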
Q4: Differentiate
between pre-emptive and non-pre-emptive scheduling?
Ans: In a pre-emptive scheduling approach, the CPU can be taken away from a process if the need arises, while in a non-pre-emptive approach, once a process has been given the CPU, the CPU cannot be taken away from that process unless the process completes or leaves the CPU to perform input/output. Pre-emptive scheduling is more useful for high-priority processes that require an immediate response, for example in a real-time system. In non-pre-emptive scheduling, short jobs may be made to wait by longer jobs, but the treatment of all processes is fairer.
Q5: What is a
semaphore? Explain busy waiting semaphores.
Ans: A semaphore
is a protected variable or abstract data type which constitutes
the classic method
for restricting access to shared resources such as shared memory in a parallel
programming environment.
Weak, busy-wait semaphores: the simplest way to implement semaphores. They are useful when critical sections last only a short time, or when we have many CPUs. S is initialized to a positive value (to allow someone in at the beginning). Apart from initialization, S is an integer variable that can only be accessed through two atomic and mutually exclusive operations:

wait(s):
    while (s.value <= 0)
        ;            /* spin: busy-wait until a unit is available */
    s.value--;

signal(s):
    s.value++;

Everything happens atomically, i.e. the operations wrap the pre- and post-protocols.
Q6: What are
the four necessary conditions of deadlock prevention?
Ans: Deadlock prevention works by denying one of the four necessary conditions for deadlock:
1. Removing
the mutual exclusion condition means that no process may have exclusive
access to a resource. This proves impossible for resources that cannot be spooled, and
even with spooled resources deadlock could still occur. Algorithms that avoid mutual
exclusion are called non-blocking synchronization algorithms.
2. The
"hold and wait" conditions may be removed by requiring processes to
request all the
resources they will need before starting up. Another way is to require
processes to release all
their resources before requesting all the resources they will need.
3. A "no preemption" (lockout) condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency
control.
4. The circular wait condition: Algorithms that avoid circular waits include "disable interrupts during critical sections", "use a hierarchy to determine a partial ordering of resources", and Dijkstra's solution.
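The resource-hierarchy approach can be sketched as follows: every process acquires locks in ascending rank order, so a circular chain of waits can never form (the resource names and ranks are illustrative assumptions):

```python
import threading

# A fixed partial ordering of resources; every process must acquire
# locks in ascending rank, which makes a circular wait impossible.
locks = {name: threading.Lock() for name in ("disk", "printer", "tape")}
rank = {"disk": 0, "printer": 1, "tape": 2}

def acquire_in_order(*names):
    """Acquire the named locks in ascending rank order."""
    ordered = sorted(names, key=rank.__getitem__)
    for n in ordered:
        locks[n].acquire()
    return ordered

def release_all(names):
    """Release held locks, conventionally in reverse order."""
    for n in reversed(names):
        locks[n].release()

held = acquire_in_order("tape", "disk")   # acquired as disk, then tape
release_all(held)
```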
Q7: What is fork- join?
Ans: Fork-join are primitives in a higher level programming language for implementing interacting processes. The syntax is as follows:
fork <label>;
join <var>;

where <label> is a label associated with some program statement, and <var> is a variable. A statement fork label1 causes creation of a new process that starts executing at the statement with the label label1. This process is concurrent with the process that executed the statement fork label1. A join statement synchronizes the birth of a process with the termination of one or more processes. Fork-join provides a functionally complete facility for control synchronization.
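The same fork-join pattern can be sketched with the Unix process primitives, assuming a POSIX system (`os.fork` is not available on Windows):

```python
import os

def fork_join_demo():
    """Fork a concurrent child process, then join on its termination."""
    pid = os.fork()                 # "fork": create a concurrent process
    if pid == 0:
        # Child: runs concurrently with the parent from this point.
        os._exit(7)                 # terminate with a known exit code
    # "join": the parent blocks until the child terminates.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)   # recover the child's exit code
```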
Q8: List and
explain the three events concerning resource allocation. Define the following:
(i) Deadlock.
(ii) Resource
request and allocation graph (RRAG)
(iii)Wait for
graph (WFG)
Ans: (i) Deadlock: Each process in a set of processes is waiting for an event that only another process in the set can cause. Deadlocks can be described by a directed bipartite graph called a resource request and allocation graph (RRAG).
(ii) Resource request and allocation graph (RRAG): A graph G = (V, E) is called bipartite if V can be decomposed into two disjoint sets V1 and V2 such that every edge in E joins a vertex in V1 to a vertex in V2. Let V1 be a set of processes and V2 be a set of resources. Since the graph is directed we will consider:
• an edge (Rj, Pi) (an assignment edge) to mean that resource Rj has been allocated to process Pi;
• an edge (Pi, Rj) (a request edge) to mean that process Pi has requested resource Rj.
(iii) Wait-for graph:
1. Use a resource allocation graph to derive a wait-for graph.
2. The wait-for graph is obtained by making an edge from P1 to P2 if P1 is waiting for a resource that is allocated to P2.
3. Deadlock exists if a cycle exists in the resulting wait-for graph.
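Step 3 reduces deadlock detection to cycle detection. A minimal sketch, assuming the wait-for graph is given as a dictionary mapping each process to the processes it waits for:

```python
def has_deadlock(wfg):
    """Detect a cycle in a wait-for graph via depth-first search."""
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:
            return True          # back edge found: a cycle, i.e. deadlock
        if p in done:
            return False
        visiting.add(p)
        for q in wfg.get(p, ()):
            if dfs(q):
                return True
        visiting.remove(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wfg)
```

For example, `{"P1": ["P2"], "P2": ["P1"]}` contains a cycle and therefore a deadlock, while `{"P1": ["P2"], "P2": []}` does not.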
Q9: What is a
race condition? Explain how does a critical section avoid this condition. What are the
properties which a data item should possess to implement a critical section?
Ans: A race condition is a situation where several processes access and manipulate shared data concurrently, and the final value of the shared data depends upon which process finishes last. To prevent race conditions, concurrent processes must be synchronized. Data consistency requires that only one process should update the value of a data item at any time; this is ensured through the notion of a critical section. A critical section for data item d is a section of code which cannot be executed concurrently with itself or with other critical sections for d. Consider a system of n processes (P0, P1, ..., Pn-1), where each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. Thus the execution of critical sections by the processes is mutually exclusive in time.
Solution
to the Critical Section Problem must satisfy the following
three conditions:
1. Mutual
Exclusion. If process Pi
is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress.
If no process is executing in its critical section and there exist some processes that
wish to enter their critical section, then the selection of the processes that will
enter the critical section next cannot be postponed indefinitely.
3. Bounded
Waiting. A bound must exist on the number of times
that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
Two assumptions are made: each process executes at a nonzero speed, and no assumption is made concerning the relative speeds of the n processes.
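A minimal sketch of mutual exclusion in practice, assuming a lock guards the critical section: without the lock, concurrent increments of the shared counter would race; with it, only one thread updates the shared data at a time.

```python
import threading

counter = 0                      # shared data item
lock = threading.Lock()          # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # entry section: acquire mutual exclusion
            counter += 1         # critical section: update shared data
        # exit section: the lock is released by the 'with' block

threads = [threading.Thread(target=increment, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final value is deterministically 4 x 10000; removing the `with lock:` line reintroduces the race condition.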