Monday, 10 February 2014

Data Communication Important Questions-Osmania University

Unit-1

  1. Write short notes on: Protocol Architecture.
  2. Discuss TCP/IP protocol layer architecture and functions in detail. (**)
  3. Write a short note on Line Configuration. (5)
  4. List and explain transmission impairments.
  5. Write short notes on any two of the following:
(a)    Pulse code modulation and delta modulation.

  1. Compare the three basic modulation techniques for transforming digital data into analog signals. (5)
  2. What is pulse stuffing? Explain how it is helpful in the design of synchronous TDM. (5)
  3. Explain the concept of Delta Modulation. (***)
  4. Draw the waveforms and explain the following coding schemes:

(i) Bipolar-AMI (ii) Differential Manchester

  1. Explain Data Communications Interfacing with neat diagrams.
  2. Explain Amplitude modulation and Angle modulation. (10)
  3. Differentiate between amplitude shift keying, frequency shift keying and phase shift keying. (4)
Unit-2
  1. Write about one error-detecting and one error-correcting code. Explain with examples.
  2. Elaborate on data transmission techniques.
  3. Explain the transmission modes, transmission characteristics and categories of applications of optical fiber. (5)
  4. Explain the different phases of HDLC with suitable example. (5)
  5. What are the topology options for Fibre Channel?
  6. Explain how the Go-Back-N ARQ technique can handle different specific cases or contingencies.
  7. Discuss the mechanisms of sliding-window flow control.
  8. State the purpose of ARQ and explain stop and wait ARQ.
  9. Explain HDLC protocol in detail. (****)
  10. Explain HDLC frame structure.
  11. Explain in detail the Cyclic Redundancy Check (CRC) error detection technique with suitable examples. (10)
  12. Write about Line-of-sight Transmission.
  13. A channel has a data rate of 4 kbps and a propagation delay of 20 ms. For what range of frame sizes does stop-and-wait give an efficiency of at least 50%?
 
Unit-3
  1. Give architecture of ATM and explain its logical connections and cells.
  2. Describe Frame relay.
  3. Elaborate on various methods of multiplexing.
  4. Contrast the architecture of a traditional telephone network circuit switch with soft switch architecture. Explain how flexibility has been achieved in soft switch architecture.
  5. Draw the Event timing diagram to bring out differences between circuit switching and packet switching.
  6. What is the need for the ATM adaptation layer? What are some of the protocols and services provided by the AAL? (10)
  7. What are the characteristics of virtual channel connections?
  8. Draw and explain the ATM cell format. (5)
  9. Explain the concepts of packet switching and how routing is done in packet switching.
  10. Write short notes on any two of the following:
    (a) ADSL
  11. Explain in detail about statistical time division multiplexing.
  12. Explain in detail about xDSL.
  13. The difficult problem in the design of a synchronous time division multiplexer is that of synchronizing the various data sources. How can this problem be overcome? Explain with a suitable example.
  14. What are the advantages and disadvantages of Frame Relay over X.25?
  15. What are the different categories and types of services that can be handled by an ATM network?
  16. Give a comparison of the circuit switching, datagram packet switching and virtual circuit packet switching techniques.
Unit-4

Discuss Ethernet and token ring by giving their frame formats. Explain the significance of each field.

 

Differentiate Ethernet and token ring.

(c) LAN transmission media.

(c) Layer 2 and Layer 3 switches

14. (a) Describe the operation of CSMA/CD.

(b) Write short notes on -virtual LANs.

(b) List various LAN topologies and explain the frame transmission in each LAN topology. (6)

Explain LAN protocol architecture.

Explain functions of a Bridge. (10)

a)      Layer 2 switches,

14. (a) Write about layer 2 and layer 3 switches.

(b) Explain briefly about Gigabit Ethernet.

 

16. (a) Explain key elements such as topology, medium access control of LAN.

(b) Bridge protocol architecture (c) Fibre Channel topologies

 

Unit-5
 
Discuss cellular wireless networks of third generation systems.
Describe IEEE 802.11.
How is medium access control done in wireless LANs?
Discuss the architecture and services defined by IEEE 802.11.
Write short notes on any two of the following:
(a) Frequency hopping spread spectrum and direct sequence
spread spectrum.
(b) Typical call between two mobile users within an area controlled by a single MTSO.
 
Explain Bluetooth architecture.
16. (a) Give an overview of the operation of cellular systems.
15. Write about IEEE 802.11 architecture and services.
17. Write short notes on any two of the following:
(a) HDLC
(b) xDSL
(c) Overview of cellular systems.
Draw and explain the IEEE 802.11 MAC Frame Format. (3)
What are the advantages and disadvantages of using CDMA for cellular networks?
 

Monday, 2 September 2013

Introduction to Real Time Operating System-RTOS

Embedded Computing
Please look around yourself. You can see many products with embedded computing systems inside them: VCRs, digital watches, elevators, automobiles and automobile engines, thermostats, industrial control equipment, scientific and medical instruments, aircraft and many others. Millions and millions of them are around you, each minute making their computations. People use the term embedded system to mean any computer system hidden in any of these products.
 Software for embedded systems must handle many problems beyond those found in application software for desktop or mainframe computers. Embedded systems often have several things to do at once. For example, imagine that you are waiting for guests invited to your birthday party in your own flat (in cold and windy winter time), which has one door. What are you doing? Listening for the doorbell. When the doorbell starts ringing, it is a signal for you to open the door for your guests. This is simple. Now imagine that you live in a big house with many entrances and doors through which your guests may come in. In this case, if a doorbell rings, you quickly make your way to the relevant door. So far this is still simple. But what if two doorbells ring at the same time? Which way do you run? To the closest? Maybe. The other guests will wait. No problem so far. But what if you have three doors? If your guests can wait, you must simply run more quickly. But what if your favorite guest is among them (say, the Queen of England) and she can't wait: what will you do?
 One solution would be to invite her to come through the main door and, if the main doorbell rings, run to open the main door in spite of everything else. In this case the queen will wait the minimum possible time. But what if this waiting time is strictly specified (say, 10 seconds for the queen) and you are alone in your house, so that just when the queen rings the doorbell, you are in the far part of your house? Still a problem?
 Although this may not seem very serious in the case of your house, consider an aircraft like a Boeing or an Airbus: thousands of doorbells, and all of them are important. In such a case the maximum possible reaction time is very critical, and writing a good piece of software for this kind of application is not a trivial task. In computing we call the doorbell an "interrupt", and the time you need to open the door is the "reaction time" of the operating system. In embedded computing we have problems similar to those described above. But we have a solution: real-time operating systems, or RTOSs for short. Despite the similar name, most real-time operating systems are rather different from desktop operating systems such as Windows or Unix. There are several main differences.


    
Desktop: The operating system takes control of the machine as soon as it is turned on and then lets you start your applications. You compile and link your applications separately from the operating system.
RTOS: You usually link your application and the RTOS together. At boot-up time your application usually gets control first, and then it starts the RTOS.

Desktop: Usually strong memory control and no recovery in an emergency.
RTOS: The RTOS usually does not control memory, but the whole system must recover anyway.

Desktop: A standard configuration.
RTOS: Flexible configuration options, including the possibility to choose a limited number of services because of limits on memory usage.
The basic building block of software written under an RTOS is the task. Tasks are very simple to write: under most RTOSs a task is simply a subroutine. At some point in your program, you make one or more calls to a function in the RTOS that starts tasks, telling it which subroutine is the starting point for each task and some other parameters that we'll discuss later, such as the task's priority, where the RTOS should find memory for the task's stack, and so on.
Most RTOSs allow you to have as many tasks as you could reasonably want. Each task in an RTOS is always in one of three states:
 1. Running - which means that the microprocessor is executing the instructions that make up this task. Unless yours is a multiprocessor system, there is only one microprocessor, and hence only one task that is in the running state at any given time.
 2. Ready - which means that some other task is in the running state but that this task has things that it could do if the microprocessor becomes available. Any number of tasks can be in this state.
 3. Blocked -which means that this task hasn’t got anything to do right now, even if the microprocessor becomes available.
Tasks get into this state because they are waiting for some external event. For example, a task that handles data coming in from a network will have nothing to do when there is no data. A task that responds to the user when he presses a button has nothing to do until the user presses the button. Any number of tasks can be in this state as well. Most RTOSs seem to proffer a double handful of other task states. Included among the offerings are suspended, pended, waiting, dormant, and delayed. These usually just amount to fine distinctions among various subcategories of the blocked and ready states listed earlier. Here we'll lump all task states into running, ready, and blocked.
 You can find out how these three states correspond with those of your RTOS by reading the manual that comes with it.
The Scheduler
 A part of the RTOS called the scheduler keeps track of the state of each task and decides which one task should go into the running state. Unlike the scheduler in Unix or Windows, the schedulers in most RTOSs are entirely simpleminded about which task should get the processor: they look at priorities you assign to the tasks, and among the tasks that are not in the blocked state, the one with the highest priority runs, and the rest of them wait in the ready state.
 The scheduler will not fiddle with task priorities: if a high-priority task hogs the microprocessor for a long time while lower-priority tasks are waiting in the ready state, that's too bad. The lower-priority tasks just have to wait; the scheduler assumes that you knew what you were doing when you set the task priorities. Here we'll adopt the fairly common use of the verb block to mean "move into the blocked state", the verb run to mean "move into the running state" or "be in the running state", and the verb switch to mean "change which task is in the running state". A task will only block because it decides for itself that it has run out of things to do. Other tasks in the system or the scheduler cannot decide for a task that it needs to wait for something. As a consequence of this, a task has to be running just before it is blocked: it has to execute the instructions that figure out that there's nothing more to do. While a task is blocked, it never gets the microprocessor. Therefore, an interrupt routine or some other task in the system must be able to signal that whatever the task was waiting for has happened. Otherwise, the task will be blocked forever.
The shuffling of tasks between the ready and running states is entirely the work of the scheduler. Tasks can block themselves, and tasks and interrupt routines can move other tasks from the blocked state to the ready state, but the scheduler has control over the running state. (Of course, if a task is moved from the blocked to the ready state and has higher priority than the task that is running, the scheduler will move it to the running state immediately. We can argue about whether the task was ever really in the ready state at all, but this is a semantic argument. The reality is that some part of the application had to do something to the task - move it out of the blocked state - and then the scheduler had to make a decision).
Here are answers to some common questions about the scheduler and task states:
 - How does the scheduler know when a task has become blocked or unblocked?
The RTOS provides a collection of functions that tasks can call to tell the scheduler what events they want to wait for and to signal that events have happened. We’ll be discussing these functions later on.
 - What happens if all the tasks are blocked?
If all the tasks are blocked, then the scheduler will spin in some tight loop somewhere inside of the RTOS, waiting for something to happen. If nothing ever happens, then that's your fault. You must make sure that something happens sooner or later by having an interrupt routine that calls some RTOS function that unblocks a task. Otherwise, your software will not be doing very much.
 - What if two tasks with the same priority are ready?
 The answer to this is all over the map, depending upon which RTOS you use. At least one system solves this problem by making it illegal to have two tasks with the same priority. Some other RTOSs will time-slice between two such tasks. Some will run one of them until it blocks and then run the other. In this case, which of the two tasks it runs also depends upon the particular RTOS.
 - If one task is running and another, higher priority task unblocks, does the task that is running get stopped and moved to the ready state right away?
 A preemptive RTOS will stop a lower-priority task as soon as the higher-priority task unblocks. A nonpreemptive RTOS will only take the microprocessor away from the lower-priority task when that task blocks.
Each task has its own private context, which includes the register values, a program counter, and a stack. However, all other data - global, static, initialized, uninitialized, and everything else - is shared among all of the tasks in the system. The RTOS typically has its own private data structures, which are not available to any of the tasks. Since you can share data variables among tasks, it is easy to move data from one task to another: the two tasks need only have access to the same variables.
You can easily accomplish this by having the two tasks in the same module in which the variables are declared, or you can make the variables public in one of the tasks and declare them extern in the other.
Shared-Data Problems
If we have two tasks sharing the same data, it can happen that one of these tasks reads half-changed data.
 Reentrancy
 Reentrant functions are functions that can be called by more than one task and that will always work correctly even if the RTOS switches from one task to another in the middle of executing the function.
 You apply three rules to decide if a function is reentrant:
 1. A reentrant function may not use variables in a nonatomic way unless they are stored on the stack of the task that called the function or are otherwise the private variables of that task.
2. A reentrant function may not call any other functions that are not themselves reentrant.
3. A reentrant function may not use the hardware in a nonatomic way.
To better understand reentrancy, and in particular rule 1 above, you must first understand where the C compiler will store variables. If you are a C language guru, you can skip the following discussion of where variables are stored in memory. If not, review your knowledge of C by examining the example and answering these questions: Which of the variables are stored on the stack and which in a fixed location in memory? What about the string literal "Where does this string go?" What about the data pointed to by vPointer? By parm_ptr?
static int static_int;
int public_int;
int initialized = 4;
char *string = "Where does this string go?";
void *vPointer;

void function(int parm, int *parm_ptr)
{
    static int static_local;
    int local;
    . . .
}
Here are the answers:
static_int - is in a fixed location in memory and is therefore shared by any task that happens to call function.
public_int - Ditto. The only difference between static_int and public_int is that functions in other C files can access public_int, but they cannot access static_int. (This means, of course, that it is even harder to be sure that this variable is not used by multiple tasks, since it might be used by any function in any module anywhere in the system.)
initialized - The same. The initial value makes no difference to where the variable is stored.
string - The same.
“Where does this string go?" - Also the same.
vPointer - The pointer itself is in a fixed location in memory and is therefore a shared variable. If function uses or changes the data values pointed to by vPointer, then those data values are also shared among any tasks that happen to call function.
parm - is on the stack. If more than one task calls function, parm will be in a different location for each, because each task has its own stack. No matter how many tasks call function, the variable parm will not be a problem.
parm_ptr - is on the stack. Therefore, function can do anything to the value of parm_ptr without causing trouble. However, if function uses or changes the values of whatever is pointed to by parm_ptr, then we have to ask where that data is stored before we know whether we have a problem. We can't answer that question just by looking at the code of function. If we look at the code that calls function and can be sure that every task will pass a different value for parm_ptr, then all is well. If two tasks might pass in the same value for parm_ptr, then there might be trouble.
static_local - is in a fixed location in memory. The only difference between this and static_int is that static_int can be used by other functions in the same C file, whereas static_local can only be used by function.
local - is on the stack.
In the last section, we discussed how the RTOS can cause a new class of shared-data problems by switching the microprocessor from task to task and, like interrupts, changing the flow of execution. The RTOS, however, also gives you some new tools with which to deal with this problem. Semaphores are one such tool.
RTOS semaphores
Back in the bad old days, the railroad barons discovered that it was bad for business if their trains ran into one another. Their solution to this problem was to use signals called “semaphores.”
When the first train enters a protected section of track, the semaphore behind it automatically lowers. When a second train arrives, the engineer notes the lowered semaphore, and he stops his train and waits for the semaphore to rise. When the first train leaves that section of track, the semaphore rises, and the engineer on the second train knows that it is safe to proceed. There is no possibility of the second train running into the first one.
The general idea of a semaphore in an RTOS is similar to the idea of a railroad semaphore. Trains do two things with semaphores. First, when a train leaves the protected section of track, it raises the semaphore. Second, when a train comes to a semaphore, it waits for the semaphore to rise if necessary, passes through the (now raised) semaphore, and lowers the semaphore. The typical semaphore in an RTOS works much the same way. Although the word was originally coined for a particular concept, the word semaphore is now one of the most slippery in the embedded-systems world. It seems to mean almost as many different things as there are software engineers, or at least as there are RTOSs.
Some RTOSs even have more than one kind of semaphore. Also, no RTOS uses the terms raise and lower; they use get and give, take and release, pend and post, p and v, wait and signal, and any number of other combinations. We will use take (for lower) and release (for raise). We'll discuss first the kind of semaphore most commonly called a binary semaphore, which is the kind most similar to the railroad semaphore; we'll mention a few variations below.
A typical RTOS binary semaphore works like this: tasks can call two RTOS functions, TakeSemaphore and ReleaseSemaphore. If one task has called TakeSemaphore to take the semaphore and has not called ReleaseSemaphore to release it, then any other task that calls TakeSemaphore will block until the first task calls ReleaseSemaphore. Only one task can have the semaphore at a time.
Multiple Semaphores
All the semaphore functions take a parameter that identifies the semaphore that is being initialized, lowered, or raised. Since most RTOSs allow you to have as many semaphores as you like, each call to the RTOS must identify the semaphore on which to operate. The semaphores are all independent of one another: if one task takes semaphore A, another task can take semaphore B without blocking. Similarly, if one task is waiting for semaphore C, that task will still be blocked even if some other task releases semaphore D.
What's the advantage of having multiple semaphores? Whenever a task takes a semaphore, it is potentially slowing the response of any other task that needs the same semaphore. In a system with only one semaphore, if the lowest-priority task takes the semaphore to change data in a shared array, the highest-priority task might block waiting for that semaphore, even if the highest-priority task wants to modify some other data and couldn't care less about the data in the shared array. By having one semaphore protect the data in the shared array and a different semaphore protect other shared data, you can build your system so that the highest-priority task can modify its data even if the lowest-priority task has taken the semaphore protecting its shared data.
Different semaphores can correspond to different shared resources. How does the RTOS know which semaphore protects which data? It doesn't. If you are using multiple semaphores, it is up to you to remember which semaphore corresponds to which data. A task that is modifying one piece of shared data must take the corresponding semaphore. You must decide what shared data each of your semaphores protects.
Another common use of semaphores is as a simple way to communicate from one task to another or from an interrupt routine to a task. For example, suppose that the task that formats printed reports builds those reports into a fixed memory buffer. Suppose also that the printer interrupts after each line, and that the printer interrupt routine feeds the next line to the printer each time it interrupts. In such a system, after formatting one report into the fixed buffer, the task must wait until the interrupt routine has finished printing that report before it can format the next report. One way to accomplish this fairly easily is to have the task wait for a semaphore after it has formatted each report. For this to work, the semaphore must start out as already taken; most RTOSs allow you to initialize semaphores in this way. When the task formats the first report and tries to take the semaphore, it blocks. The interrupt routine will release the semaphore and thereby unblock the task when the report is printed.
Semaphore Problems
When first reading about semaphores, it is very tempting to conclude that they represent the solution to all of our shared-data problems. This is not true. In fact, your systems will probably work better the fewer times you have to use semaphores. The problem is that semaphores work only if you use them perfectly, and there are no guarantees that you (or your coworkers) will do that. There are any number of tried-and-true ways to mess up with semaphores:
Forgetting to take the semaphore. Semaphores only work if every task that accesses the shared data, for read or for write, uses the semaphore. If anybody forgets, then the RTOS may switch away from the code that forgot to take the semaphore and cause an ugly shared-data bug.
Forgetting to release the semaphore. If any task fails to release the semaphore, then every other task that ever uses the semaphore will sooner or later block waiting to take that semaphore and will be blocked forever.
Taking the wrong semaphore. If you are using multiple semaphores, then taking the wrong one is as bad as forgetting to take one.
Holding a semaphore for too long. Whenever one task takes a semaphore, every other task that subsequently wants that semaphore has to wait until the semaphore is released. If one task takes the semaphore and then holds it for too long, other tasks may miss real-time deadlines.
A particularly perverse instance of this problem can arise if the RTOS switches from a low-priority task (call it Task C) to a medium-priority task (call it Task B) after Task C has taken a semaphore. A high-priority task (call it Task A) that wants the semaphore then has to wait until Task B gives up the microprocessor: Task C can't release the semaphore until it gets the microprocessor back. No matter how carefully you code Task C, Task B can prevent Task C from releasing the semaphore and can thereby hold up Task A indefinitely. This problem is called priority inversion; some RTOSs resolve this problem with priority inheritance: they temporarily boost the priority of Task C to that of Task A whenever Task C holds the semaphore and Task A is waiting for it.
Semaphore Variants
There are a number of different kinds of semaphores. Here is an overview of some of the more common variations:
Some systems offer semaphores that can be taken multiple times. Essentially, such semaphores are integers; taking them decrements the integer and releasing them increments the integer. If a task tries to take the semaphore when the integer is equal to zero, then the task will block. These semaphores are called counting semaphores, and they were the original type of semaphore.
Some systems offer semaphores that can be released only by the task that took them. These semaphores are useful for the shared - data problem, but they cannot be used to communicate between two tasks. Such semaphores are sometimes called resource semaphores or resources.
Some RTOSs offer one kind of semaphore that will automatically deal with the priority inversion problem and another that will not. The former kind of semaphore is commonly called a mutex semaphore or mutex. (Other RTOSs offer semaphores that they call mutexes but that do not deal with priority inversion.)
If several tasks are waiting for a semaphore when it is released, systems vary as to which task gets to run. Some systems will run the task that has been waiting longest; others will run the highest - priority task that is waiting for the semaphore. Some systems give you the choice.
Ways to Protect Shared Data
We have discussed two ways to protect shared data: disabling interrupts and using semaphores. There is a third way that deserves at least a mention: disabling task switches. Most RTOSs have two functions you can call, one to disable task switches and one to reenable them after they’ve been disabled. As is easy to see, you can protect shared data from an inopportune task switch by disabling task switches while you are reading or writing the shared data.
Here’s a comparison of the three methods of protecting shared data:
1. Disabling interrupts is the most drastic in that it will affect the response times of all the interrupt routines and of all other tasks in the system. (If you disable interrupts, you also disable task switches, because the scheduler cannot get control of the microprocessor to switch.) On the other hand, disabling interrupts has two advantages. (1) It is the only method that works if your data is shared between your task code and your interrupt routines. Interrupt routines are not allowed to take semaphores, as we will discuss in the next chapter, and disabling task switches does not prevent interrupts. (2) It is fast. Most processors can disable or enable interrupts with a single instruction; all of the RTOS functions are many instructions long. If a task's access to shared data lasts only a short period of time - incrementing a single variable, for example - sometimes it is preferable to take the shorter hit on interrupt service response than to take the longer hit on task response that you get from using a semaphore or disabling task switches.
2. Taking semaphores is the most targeted way to protect data, because it affects only those tasks that need to take the same semaphore. The response times of interrupt routines and of tasks that do not need the semaphore are unchanged. On the other hand, semaphores do take up a certain amount of microprocessor time - albeit not much in most RTOSs - and they will not work for interrupt routines.
3. Disabling task switches is somewhere in between the two. It has no effect on interrupt routines, but it stops response for all other tasks cold.
Tasks must be able to communicate with one another to coordinate their activities or to share data. For example, in the underground tank monitoring system the task that calculates the amount of gas in the tanks must let other parts of the system know how much gasoline there is. In Telegraph, the system we discussed earlier that connects a serial-port printer to a network, the tasks that receive data on the network must hand that data off to other tasks that pass the data on to the printer or that determine responses to send on the network.
We also discussed using shared data and semaphores to allow tasks to communicate with one another. In this section we will discuss several other methods that most RTOSs offer: queues, mailboxes, and pipes.
Here's a very simple example. Suppose that we have two tasks, Task1 and Task2, each of which has a number of high-priority, urgent things to do. Suppose also that from time to time these two tasks discover error conditions that must be reported on a network, a time-consuming process. In order not to delay Task1 and Task2, it makes sense to have a separate task, ErrorsTask, that is responsible for reporting the error conditions on the network. Whenever Task1 or Task2 discovers an error, it reports that error to ErrorsTask and then goes on about its own business. The error-reporting process undertaken by ErrorsTask does not delay the other tasks. An RTOS queue is the way to implement this design.
Some Ugly Details.
As you’ve no doubt guessed, queues are not quite simple. Here are some of the complications that you will have to deal with in most RTOSs:
Most RTOSs require that you initialize your queues before you use them, by calling a function provided for this purpose. On some systems, it is also up to you to allocate the memory that the RTOS will manage as a queue. As with semaphores, it makes most sense to initialize queues in some code that is guaranteed to run before any task tries to use them.
Since most RTOSs allow you to have as many queues as you want, you pass an additional parameter to every queue function: the identity of the queue to which you want to write or from which you want to read. Various systems do this in various ways.
If your code tries to write to a queue when the queue is full, the RTOS must either return an error to let you know that the write operation failed (a more common RTOS behavior) or it must block the task until some other task reads data from the queue and thereby creates some space (a less common RTOS behavior). Your code must deal with whichever of these behaviors your RTOS exhibits.
Many RTOSs include a function that will read from a queue if there is any data and will return an error code if not. This function is in addition to the one that will block your task if the queue is empty.
The amount of data that the RTOS lets you write to the queue in one call may not be exactly the amount that you want to write. Many RTOSs are inflexible about this. One common RTOS characteristic is to allow you to write onto a queue in one call the number of bytes taken up by a void pointer.
Mailboxes
In general, mailboxes are much like queues. The typical RTOS has functions to create, to write to, and to read from mailboxes, and perhaps functions to check whether the mailbox contains any messages and to destroy the mailbox if it is no longer needed. The details of mailboxes, however, are different in different RTOSs.
Here are some of the variations that you might see:
Although some RTOSs allow a certain number of messages in each mailbox, a number that you can usually choose when you create the mailbox, others allow only one message in a mailbox at a time. Once one message is written to a mailbox under these systems, the mailbox is full; no other message can be written to the mailbox until the first one is read.
In some RTOSs, the number of messages in each mailbox is unlimited. There is a limit to the total number of messages that can be in all of the mailboxes in the system, but these messages will be distributed into the individual mailboxes as they are needed.
In some RTOSs, you can prioritize mailbox messages. Higher-priority messages will be read before lower-priority messages, regardless of the order in which they are written into the mailbox.
Pipes
Pipes are also much like queues. The RTOS can create them, write to them, read from them, and so on. The details of pipes, however, like the details of mailboxes and queues, vary from RTOS to RTOS. Some variations you might see include the following:
Some RTOSs allow you to write messages of varying lengths onto pipes (unlike mailboxes and queues, in which the message length is typically fixed).
Pipes in some RTOSs are entirely byte-oriented: if Task A writes 11 bytes to the pipe and then Task B writes 19 bytes to the pipe, then if Task C reads 14 bytes from the pipe, it will get the 11 that Task A wrote plus the first 3 that Task B wrote. The other 16 that Task B wrote remain in the pipe for whatever task reads from it next.
Some RTOSs use the standard C library functions fread and fwrite to read from and write to pipes.
Which Should I Use?
Since queues, mailboxes, and pipes vary so much from one RTOS to another, it is hard to give much universal guidance about which to use in any given situation. When RTOS vendors design these features, they must make the usual programming trade-offs among flexibility, speed, memory space, the length of time that interrupts must be disabled within the RTOS functions, and so on. Most RTOS vendors describe these characteristics in their documentation; read it to determine which of the communications mechanisms best meets your requirements.
Pitfalls
Although queues, mailboxes, and pipes can make it quite easy to share data among tasks, they can also make it quite easy to insert bugs into your system. Here are a few tried-and-true methods for making yourself some trouble:
Most RTOSs do not restrict which tasks can read from or write to any given queue, mailbox, or pipe. Therefore, you must ensure that tasks use the correct one each time. If some task writes temperature data onto a queue read by a task expecting error codes, your system will not work very well. This is obvious, but it is easy to mess up.
The RTOS cannot ensure that data written onto a queue, mailbox, or pipe will be properly interpreted by the task that reads it. If one task writes an integer onto the queue and another task reads it and then treats it as a pointer, your product will not ship until the problem is found and fixed.
Running out of space in queues, mailboxes, or pipes is usually a disaster for embedded software. When one task needs to pass data to another, it is usually not optional. Good solutions to this problem are scarce. Often, the only workable one is to make your queues, mailboxes, and pipes large enough in the first place.
Passing pointers from one task to another through a queue, mailbox, or pipe is one of several ways to create shared data inadvertently.
Timer Functions
Most embedded systems must keep track of the passage of time. To extend its battery life, the cordless bar-code scanner must turn itself off after a certain number of seconds. Systems with network connections must wait for acknowledgements to data that they have sent and retransmit the data if an acknowledgement doesn’t show up on time. Manufacturing systems must wait for robot arms to move or for motors to come up to speed. One simple service that most RTOSs offer is a function that delays a task for a period of time; that is, blocks it until the period of time expires.
Questions
How do I know that the taskDelay function takes a number of milliseconds as its parameter? You don’t. In fact, it doesn’t. The taskDelay function in VxWorks, like the equivalent delay function in most RTOSs, takes the number of system ticks as its parameter. The length of time represented by each system tick is something you can usually control when you set up the system.
How accurate are the delays produced by the taskDelay function? They are accurate to the nearest system tick. The RTOS works by setting up a single hardware timer to interrupt periodically, say, every millisecond, and bases all timings on that interrupt. This timer is often called the heartbeat timer. For example, if one of your tasks passes 3 to taskDelay, that task will block until the heartbeat timer interrupts three times. The first timer interrupt may come almost immediately after the call to taskDelay, or it may come after just under one tick time, or after any amount of time between those two extremes. The task will therefore be blocked for a period of time that is between just a hair more than two system ticks and just a hair less than three. (Note that the task will unblock when the delay time expires; when it will run depends, as always, upon what other, higher-priority tasks are competing for the microprocessor at that time.)
How does the RTOS know how to set up the timer hardware on my particular hardware? As we discussed earlier, it is common for microprocessors used in embedded systems to have timers in them. Since RTOSs, like other operating systems, are microprocessor-dependent, the engineers writing the RTOS know what kind of microprocessor the RTOS will run on and can therefore program the timer on it. If you are using nonstandard timer hardware, then you may have to write your own timer setup software and timer interrupt routine. The RTOS will have an entry point for your interrupt routine to call every time the timer expires. Many RTOS vendors provide board support packages, or BSPs, which contain driver software for common hardware components (such as timers) and instructions and model code to help you write driver software for any special hardware you are using.
What is a "normal" length for the system tick? There really isn’t one. The advantage of a short system tick is that you get accurate timings. The disadvantage is that the microprocessor must execute the timer interrupt routine frequently. Since the hardware timer that controls the system tick usually runs all the time, whether or not any task has requested timing services, a short system tick can decrease system throughput quite considerably by increasing the amount of microprocessor time spent in the timer interrupt routine. Real-time system designers must make this trade-off.
What if my system needs extremely accurate timing? You have two choices. One is to make the system tick short enough that RTOS timings fit your definition of “extremely accurate”. The second is to use a separate hardware timer for those timings that must be extremely accurate. It is not uncommon to design an embedded system that uses dedicated timers for a few accurate timings and uses the RTOS functions for the many other timings that need not be so accurate. The advantage of the RTOS timing functions is that one hardware timer times any number of operations simultaneously.
Other Timing Services
Most RTOSs offer an array of other timing services, all of them based on the system tick. For example, most allow you to limit how long a task will wait for a message from a queue or a mailbox, how long a task will wait for a semaphore, and so on. Although these services are occasionally useful, exercise some caution. For example, if you set a time limit when your high-priority task attempts to get a semaphore and that time limit expires, then your task does not have the semaphore and cannot access the shared data. Then you’ll have to write code to allow your task to recover. Before writing this code (which is likely to be difficult, since your task needs to use the data but can’t), it may make sense to ask whether there might not be a better design. If your high-priority task is in such a hurry that it cannot wait for the semaphore, perhaps it would make more sense to send instructions about using the shared data through a mailbox to a lower-priority task and let the higher-priority task get on with its other work.
A rather more useful service offered by many RTOSs is to call the function of your choice after a given number of system ticks. Depending upon the RTOS, your function may be called directly from the timer interrupt service routine, or it may be called from a special, high-priority task within the RTOS.
Events
Another service many RTOSs offer is the management of events within the system. An event is essentially a Boolean flag that tasks can set or reset and that other tasks can wait for. For example, when the user pulls the trigger on the cordless bar-code scanner, the task that turns on the laser scanning mechanism and tries to recognize the bar code must start. Events provide an easy way to do this: the interrupt routine that runs when the user pulls the trigger sets an event for which the scanning task is waiting. If you are familiar with the word "event" in the context of regular operating systems, you can see that it means something different in the RTOS context.
Some standard features of events are listed below:
More than one task can block waiting for the same event, and the RTOS will unblock all of them (and run them in priority order) when the event occurs. For example, if the radio task needs to start warming up the radio when the user pulls the trigger, then that task can also wait on the trigger-pull event.
RTOSs typically form groups of events, and tasks can wait for any subset of events within the group. For example, an event indicating that the user pressed a key on the scanner keypad might be in the same group with the trigger-pull event. If the radio task needs to wake up both for a key and for the trigger, it can do that. The scanning task will wake up only for the trigger event.
Different RTOSs deal in different ways with the issue of resetting an event after it has occurred and tasks that were waiting for it have been unblocked. Some RTOSs reset events automatically; others require that your task software do this. It is important to reset events: if the trigger-pull event is not reset, for example, then tasks that need to wait for that event to be set will never again wait.
A Brief Comparison of the Methods for Intertask Communication
We have discussed using queues, pipes, mailboxes, semaphores, and events for communication between two tasks or between an interrupt routine and a task. Here is a comparison of these methods:
Semaphores are usually the fastest and simplest methods. However, not much information can pass through a semaphore, which passes just a 1-bit message saying that it has been released.
Events are a little more complicated than semaphores and take up just a hair more microprocessor time than semaphores. The advantage of events over semaphores is that a task can wait for any one of several events at the same time, whereas it can only wait for one semaphore. (Another advantage is that some RTOSs make it convenient to use events and make it inconvenient to use semaphores for this purpose.)
Queues allow you to send a lot of information from one task to another. Even though a task can wait on only one queue (or mailbox or pipe) at a time, the fact that you can send data through a queue makes it even more flexible than events. The drawbacks are that (1) putting messages into and taking messages out of queues is more microprocessor-intensive and (2) queues offer you many more opportunities to insert bugs into your code. Mailboxes and pipes share all of these characteristics.
Memory management
Most RTOSs have some kind of memory management subsystem. Although some offer the equivalent of the C library functions malloc and free, real-time systems engineers often avoid these two functions because they are typically slow and because their execution times are unpredictable. They favor instead functions that allocate and free fixed-size buffers, and most RTOSs offer fast and predictable functions for that purpose.
The MultiTask! system is a fairly typical RTOS in this regard: you can set up pools, each of which consists of some number of memory buffers. In any given pool, all of the buffers are the same size. The reqbuf and getbuf functions allocate a memory buffer from a pool. Each returns a pointer to the allocated buffer; the only difference between them is that if no memory buffers are available, getbuf will block the task that calls it, whereas reqbuf will return a NULL pointer right away.
void *getbuf (unsigned int uPoolId, unsigned int uTimeout);
void *reqbuf (unsigned int uPoolId);
In each of these functions, the uPoolId parameter indicates the pool from which the memory buffer is to be allocated. The uTimeout parameter in getbuf indicates the length of time that the task is willing to wait for a buffer if none are free. The size of the buffer that is returned is determined by the pool from which the buffer is allocated, since all the buffers in any one pool are the same size. The tasks that call these functions must know the sizes of the buffers in each pool.
The relbuf function frees a memory buffer.
void relbuf (unsigned int uPoolId, void *p_vBuffer);
Note that relbuf does not check that p_vBuffer really points to a buffer in the pool indicated by uPoolId. If your code passes an invalid value for p_vBuffer, the results are usually catastrophic.
The MultiTask! system is also typical of many RTOSs in that it does not know where the memory on your system is. Remember that in most embedded systems, unlike desktop systems, your software, not the operating system, gets control of a machine first. When it starts, the RTOS has no way of knowing what memory is free and what memory your application is already using. MultiTask! will manage a pool of memory buffers for you, but you must tell it where the memory is. The init_mem_pool function allows you to do this.
int init_mem_pool (
unsigned int uPoolId,
void *p_vMemory,
unsigned int uBufSize,
unsigned int uBufCount,
unsigned int uPoolType
);
The uPoolId parameter is the identifier you will use in later calls to getbuf, reqbuf, and relbuf. The p_vMemory parameter points to the block of memory to use as the pool; you must make sure that it points to available memory. The uBufSize and uBufCount parameters indicate how large each buffer is and how many of them there are in the pool. (The uPoolType parameter indicates whether these buffers will be used by tasks or by interrupt routines. This distinction is peculiar to MultiTask!, and we will not discuss it here.) The picture shows how this function allocates the pool of memory buffers.
Interrupt Routines in an RTOS Environment
Interrupt routines in most RTOS environments must follow two rules that do not apply to task code.
Rule 1. An interrupt routine must not call any RTOS function that might block the caller. Therefore, interrupt routines must not get semaphores, read from queues or mailboxes that might be empty, wait for events, and so on. If an interrupt routine calls an RTOS function and gets blocked, then, in addition to the interrupt routine, the task that was running when the interrupt occurred will be blocked, even if that task is the highest-priority task. Also, most interrupt routines must run to completion to reset the hardware to be ready for the next interrupt.
Rule 2. An interrupt routine may not call any RTOS function that might cause the RTOS to switch tasks unless the RTOS knows that an interrupt routine, and not a task, is executing. This means that interrupt routines may not write to mailboxes or queues on which tasks may be waiting, set events, release semaphores, and so on - unless the RTOS knows it is an interrupt routine that is doing these things.
If an interrupt routine breaks this rule, the RTOS might switch control away from the interrupt routine (which the RTOS thinks is a task) to run another task, and the interrupt routine may not complete for a long time, blocking at least all lower-priority interrupts and possibly all interrupts.
In the next few figures, we’ll examine these rules.
Rule 1: No Blocking
The figure shows the software for the control of the nuclear reactor. This time, the task code and the interrupt routine share the temperature data with a semaphore. This code will not work: it violates rule 1. If the interrupt routine happened to interrupt vTaskTestTemperatures while it had the semaphore, then when the interrupt routine called GetSemaphore, the RTOS would notice that the semaphore was already taken and block. This would stop both the interrupt routine and vTaskTestTemperatures (the task that was interrupted), after which the system would grind to a halt in a sort of one-armed deadly embrace. With both the interrupt routine and vTaskTestTemperatures blocked, no code will ever release the semaphore.
(Some RTOSs have an alternative - and equally useless - behavior in this situation: when the interrupt routine calls GetSemaphore, these RTOSs notice that vTaskTestTemperatures already has the semaphore and, since they think that vTaskTestTemperatures is still running, they let the interrupt routine continue executing. In this case, the semaphore no longer protects the data properly.)
Even if the interrupt routine interrupts some other task, this code can cause problems. If vTaskTestTemperatures has the semaphore when the interrupt occurs, then, when the interrupt routine tries to get the semaphore too, it will block (along with whatever task was running when the interrupt occurred). For as long as the interrupt routine is blocked (and that may be a long time, if vTaskTestTemperatures does not get the microprocessor back to allow it to release the semaphore), all lower-priority interrupt routines and the task that was unfortunate enough to be interrupted will get no microprocessor time.
Some RTOSs contain various functions that never block. For example, many have a function that returns the status of a semaphore. Since such a function does not block, interrupt routines can call it (assuming that this is in compliance with rule 2, which it usually is).
Rule 2: No RTOS Calls without Fair Warning
To understand rule 2, examine the figure above, a naive view of how an interrupt routine should work under an RTOS. The graph shows how the microprocessor’s attention shifts from one part of the code to another over time. The interrupt routine interrupts the lower-priority task and, among other things, calls the RTOS to write a message to a mailbox (legal under rule 1, assuming that function can’t block). When the interrupt routine exits, the RTOS arranges for the microprocessor to execute either the original task or, if a higher-priority task was waiting on the mailbox, that higher-priority task.
The figure below shows what really happens, at least in the worst case. If the higher-priority task is blocked on the mailbox, then as soon as the interrupt routine writes to the mailbox, the RTOS unblocks the higher-priority task. Then the RTOS (knowing nothing about the interrupt routine) notices that the task it thinks is running is no longer the highest-priority task that is ready to run. Therefore, instead of returning to the interrupt routine (which the RTOS thinks is part of the lower-priority task), the RTOS switches to the higher-priority task. The interrupt routine doesn’t get to finish until later.
RTOSs use various methods for solving this problem, but all require your cooperation. The figure shows the first scheme. In it, the RTOS intercepts all the interrupts and then calls your interrupt routine. By doing this, the RTOS finds out when an interrupt routine has started. When the interrupt routine later writes to the mailbox, the RTOS knows to return to the interrupt routine and not to switch tasks, no matter what task is unblocked by the write to the mailbox. When the interrupt routine is over, it returns, and the RTOS gets control again. The RTOS scheduler then figures out what task should now get the microprocessor.
If your RTOS uses this method, then you will need to call some function within the RTOS that tells the RTOS where your interrupt routines are and which hardware interrupts correspond to which interrupt routines.
[Picture 1]

[Picture 2]
The figure shows an alternative scheme, in which the RTOS provides a function that interrupt routines call to let the RTOS know that an interrupt routine is running. After the call to that function, the RTOS knows that an interrupt routine is in progress, and when the interrupt routine writes to the mailbox the RTOS always returns to the interrupt routine, no matter what task is ready, as in the figure. When the interrupt routine is over, it jumps to or calls some other function in the RTOS, which calls the scheduler to figure out what task should now get the microprocessor. Essentially, this procedure disables the scheduler for the duration of the interrupt routine.

[Picture 3]
In this plan, your interrupt routines must call the appropriate RTOS functions at the right moments.

We have discussed the RTOS environment in detail: the concept of a task, the shared-data problem, semaphores, and more. Commercial RTOSs are available from numerous well-known vendors; examples include VxWorks, VRTX, pSOS, Nucleus, C Executive, LynxOS, QNX, MultiTask!, and AMX. The main standard is POSIX.
Tasks and States
Tasks and Data
Semaphores and Shared Data
Semaphores as a Signaling Device
Message Queues, Mailboxes, and Pipes
Timer Functions
Events
Memory management
Interrupt Routines in an RTOS Environment - Part 1
Interrupt Routines in an RTOS Environment - Part 2

Embedded Systems Lab Viva Questions

Embedded Systems Course Objectives
Course: (CS 432) Embedded systems Lab

Viva Questions
1. What is a watchdog timer?
2. What is a semaphore?
3.  What is mutex?
4. Can structures be passed to functions by value?
5. Why can't arrays be passed by value to functions?
6. What are the advantages and disadvantages of using macros and inline functions?
7. What happens when recursive functions are declared inline?
8. What is the scope of static variables?
9.  What is the difference between a ‘thread’ and a ‘process’?
10. Explain the working of Virtual Memory?
11. What is Concurrency? Explain with example Deadlock and Starvation.
12. What is the difference between a FIFO and memory?
13. Is it necessary to start the execution of a program from the main() in C?
14. What is an anti-aliasing filter? Why is it required?
15. How do you implement a fourth-order Butterworth LP filter at 1 kHz if the sampling frequency is 8 kHz?
16. Is the 8085 an embedded system?
17. What is the role of a segment register?
18. What types of registers does an (Intel) CPU contain?
19. What is a PLC system?
20. What is the difference between a microprocessor and a microcontroller?
21. Can we use a semaphore, mutex, or spin lock in interrupt context in the Linux kernel?
22. Does DMA deal with physical or virtual addresses?
23. What is the Dirac delta function, what is its Fourier transform, and why is it important?
24. What is the difference between testing and verification of a VLSI circuit?
25. While writing interrupt handlers (ISRs), which points need to be considered?
26. Can a microcontroller work independently?
27. What happens when recursive functions are declared inline?
28. What is the scope of static variables?
29. What is interrupt latency?
30. Explain operations involving unsigned and signed operands. Will unsigned be converted to signed?
31. Explain the order of constructor and destructor calls in the case of multiple inheritance.
32. Explain the difference between object-oriented and object-based languages.
33. What are the advantages and disadvantages of using macro and inline functions?
34. Explain why arrays cannot be passed by value to functions.
35. Explain what interrupt latency is. How can we reduce it?
36. Explain what are the different qualifiers in C?
37. What are the 5 different types of inheritance relationships?
38. What will malloc(sizeof(-10)) return?
39. Can structures be passed to functions by value?
40. Can we have a constant volatile variable?
41. What are the different storage classes in C?
42. What is a forward reference w.r.t. pointers in C?
43. How is function itoa() written in C?
44. Explain what is the difference between embedded systems and the system in which RTOS is running?
45. How to define a structure with bit field members?
46. What is interrupt latency?
47. What is the scope of static variables?
48. What is pass by value and pass by reference? How are structure passed as arguments? 
49. What is the difference between using a macro and an inline function?
50. What is the volatile keyword used for? 
51. What are hard and soft Real time systems? 
52. What is a semaphore? what are the different types of semaphore? 
53. Write a constant-time statement for finding out if a given number is a power of 2.
54. What are recursive functions? Can we make them inline?
55. What is the size of the int, char and float data types? 
56. What does malloc do? What will happen if we have a statement like malloc(sizeof(0));
57. What is meant by a forward reference in C? 
58. What is the order of calling for the constructors and destructors in case of objects of inherited classes? 
59. Explain the properties of an object-oriented programming language.
60. What do you mean by interrupt latency? 
61. What typecast is applied when we have a signed and an unsigned int in an expression? 
62. How are variables mapped across to the various memories by the C compiler? 
63. What is a memory leak? What is a segmentation fault? 
64. What is ISR? Can they be passed any parameter and can they return a value? 
65. a=7; b=8; x=a++-b; printf("%d", x); What does this code give as output?
66. What are little endian and big endian types of storage? How can you identify which type of allocation a system follows? 
67. What is the scope of a function that is declared as static? 
68. What is the use of having the const qualifier? 
69. Why do we need an infinite loop in embedded systems development? What are the different ways to code an infinite loop?
70. What is the difference between an embedded system and a system in which an RTOS is running?