Tapan K. Sarkar

Prof. Tapan K. Sarkar
Department of Electrical Engineering and Computer Science
Syracuse University
323 Link Hall, Syracuse, NY 13244-1240

Tapan K. Sarkar received the B.Tech. degree from the Indian Institute of Technology, Kharagpur, in 1969, the M.Sc.E. degree from the University of New Brunswick, Fredericton, NB, Canada, in 1971, and the M.S. and Ph.D. degrees from Syracuse University, Syracuse, NY, in 1975. From 1975 to 1976, he was with the TACO Division of the General Instruments Corporation. He was with the Rochester Institute of Technology, Rochester, NY, from 1976 to 1985. He was a Research Fellow at the Gordon McKay Laboratory, Harvard University, Cambridge, MA, from 1977 to 1978. He is now a Professor in the Department of Electrical and Computer Engineering, Syracuse University. His current research interests deal with numerical solutions of operator equations arising in electromagnetics and signal processing with application to system design.

 

He obtained one of the “best solution” awards in May 1977 at the Rome Air Development Center Spectral Estimation Workshop. He received the Best Paper Award of the IEEE Transactions on Electromagnetic Compatibility in 1979 and the Best Paper Award at the 1997 National Radar Conference. He has authored or coauthored more than 300 journal articles, numerous conference papers, 32 book chapters, and fifteen books, including his most recent ones, Iterative and Self Adaptive Finite-Elements in Electromagnetic Modeling (Boston, MA: Artech House, 1998), Wavelet Applications in Electromagnetics and Signal Processing (Boston, MA: Artech House, 2002), Smart Antennas (IEEE Press and John Wiley & Sons, 2003), History of Wireless (IEEE Press and John Wiley & Sons, 2005), Physics of Multiantenna Systems and Broadband Adaptive Processing (John Wiley & Sons, 2007), Parallel Solution of Integral Equation-Based EM Problems in the Frequency Domain (IEEE Press and John Wiley & Sons, 2009), and Time and Frequency Domain Solutions of EM Problems Using Integral Equations and a Hybrid Methodology (IEEE Press and John Wiley & Sons, 2010).

Dr. Sarkar is a Registered Professional Engineer in the State of New York. He received the College of Engineering Research Award in 1996 and the Chancellor’s Citation for Excellence in Research in 1998 at Syracuse University. He was an Associate Editor for feature articles of the IEEE Antennas and Propagation Society Newsletter (1986-1988), Associate Editor of the IEEE Transactions on Electromagnetic Compatibility (1986-1989), Chairman of the Inter-commission Working Group of International URSI on Time Domain Metrology (1990–1996), a Distinguished Lecturer of the Antennas and Propagation Society (2000-2003), a member of the Antennas and Propagation Society AdCom (2004-2007), a member of the board of directors of the Applied Computational Electromagnetics Society (ACES) (2000-2006), vice president of ACES, a member of the IEEE Electromagnetics Award Board (2004-2007), and an Associate Editor of the IEEE Transactions on Antennas and Propagation (2004-2010). He is currently on the editorial boards of Digital Signal Processing – A Review Journal, Journal of Electromagnetic Waves and Applications, and Microwave and Optical Technology Letters. He chairs the International Conference Technical Committee # 1 on Field Theory and Guided Waves of the IEEE Microwave Theory and Techniques Society.

He received the degree of Docteur Honoris Causa from the Université Blaise Pascal, Clermont-Ferrand, France, in 1998 and from the Polytechnic University of Madrid, Madrid, Spain, in 2004. He received the medal of Friend of the City of Clermont-Ferrand, France, in 2000.

Physics of Multiantenna Systems and their Impacts on Wireless Systems

The objective of this presentation is to present a scientific methodology that can be used to analyze the physics of multiantenna systems. Multiantenna systems are becoming exceedingly popular because they promise a dimension, namely spatial diversity, beyond what was previously available to communication systems engineers: the use of multiple transmit and receive antennas provides a means to exploit spatial diversity, at least from a conceptual standpoint. In this way, one could increase the capacities of existing systems that already exploit time and frequency diversity. In such a scenario it could be said that the deployment of multiantenna systems is equivalent to using an overmoded waveguide, where information is simultaneously transmitted not only via the dominant mode but also through all the higher-order modes. We look into this interesting possibility and study why communication engineers advocate the use of such systems, whereas electromagnetic and microwave engineers have avoided such propagation mechanisms in their systems. Most importantly, we study the physical principles of multiantenna systems through Maxwell’s equations and utilize them to perform various numerical simulations to observe how a typical system will behave in practice. There is an important feature peculiar to electrical engineering that is often not treated properly in system applications: superposition of power does not hold.

Consider two interacting plane waves with power densities of 100 and 1 W/m². Even though one of the waves has only 1% of the power density of the other, if the two waves interfere constructively or destructively the resulting received power density is neither 101 nor 99 W/m², but rather 121 or 81 W/m², respectively (constructive: (√100 + √1)² = 121; destructive: (√100 − √1)² = 81). Hence, there is a variation of about 40%. Adding powers therefore leads to a significant error in the received power density. The key point here is that in the electrical engineering context it is the fields or amplitudes that can be added, NOT the powers. This simple example based on Maxwellian physics clearly illustrates that the material available in standard textbooks on wireless communication, which claims that “an M-element array, in general, can achieve a signal-to-noise ratio improvement of 10log10(M) in the presence of Additive White Gaussian Noise with no interference or multipath over that of a single element”, has to be taken with a big grain of salt. Hence, we need to be careful when comparing the performance of different systems and making value judgments. In addition, appropriate metrics that are valid from a scientific standpoint should be selected to make such comparisons. Examples will be presented to illustrate how this important principle impacts certain conventional ways of thinking in wireless communication.
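A minimal numerical check of this point (Python with NumPy; the field amplitudes are normalized so that power density equals the squared field amplitude, which is sufficient for the comparison):

    import numpy as np

    # Two co-polarized plane waves with power densities 100 W/m^2 and 1 W/m^2.
    # Power density is proportional to |E|^2, so the field amplitudes scale as
    # the square roots of the power densities.
    p1, p2 = 100.0, 1.0
    e1, e2 = np.sqrt(p1), np.sqrt(p2)

    constructive = (e1 + e2) ** 2   # fields in phase
    destructive = (e1 - e2) ** 2    # fields out of phase
    naive_sum = p1 + p2             # (incorrect) superposition of powers

    print(constructive, destructive, naive_sum)   # 121.0 81.0 101.0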

Also, we examine the phenomenon of height-gain in wireless cellular communication and illustrate that, under current operating scenarios in which the base station antennas are deployed on a tall tower, the field strength actually decreases with the height of the antenna over a realistic ground, and there is no height gain in the near field. Therefore, to obtain a scientifically meaningful operational environment, vertically polarized base station antennas should be deployed closer to the ground. Also, when deploying antennas on tall towers it may be more advantageous to use horizontally polarized antennas than vertically polarized ones for communication in cellular environments. Numerical examples are presented to illustrate these cases.

We next look at the concept of channel capacity and review the various definitions of it that exist in the literature. The concept of channel capacity is intimately connected with the concept of entropy, and hence related to physics. We present two forms of the channel capacity: the usual Shannon capacity, which is based on power, and the seldom used definition due to Hartley, which uses values of the voltage. These two definitions of capacity are shown to yield numerically very similar values when one is dealing with conjugately matched transmit-receive antenna systems. However, from an engineering standpoint, the voltage-based form of the channel capacity is more useful as it is related to the sensitivity of the receiver to an incoming electromagnetic wave. Furthermore, we illustrate through numerical simulations how to apply the channel capacity formulas in an electromagnetically proper way. To perform the calculations correctly in order to compare different scenarios, in all simulations the input power fed to the antennas needs to remain constant, and conclusions should not be drawn using superposition of power. Second, one should deal with the gain of the antennas and not their directivities, which is an alternate way of saying that one should refer to the input power fed to the antennas rather than to the radiated power. The radiated power essentially deals with the directivity of an antenna, and theoretically one can obtain any value for the directivity of an aperture, but the gain is finite. Hence, the distinction between gain and directivity needs to be made if one wishes to compare system performances in a proper way. Finally, one needs to use Poynting’s theorem to calculate the power in the near field and not rely exclusively on either the voltage or the current. These restrictions apply to the power form of the Shannon channel capacity theorem. The voltage form of the capacity due to Hartley is applicable to both near and far fields. Use of realistic antenna models in place of representing antennas by point sources further illustrates the above points, as point sources by definition generate only a far field and do not exist in real life.
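As a rough guide to the two forms discussed above (the exact expressions used in the talk may differ; the voltage-based form shown here is only one plausible reading of Hartley's definition, with P_S, P_N the signal and noise powers and V_S, V_N the corresponding voltages at the receiver, and B the bandwidth):

    C_Shannon = B log2(1 + P_S / P_N)
    C_Hartley ≈ B log2(1 + |V_S| / |V_N|)

For a conjugately matched system the two expressions give numerically similar, though not identical, capacities.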

The concept of a multiple-input-multiple-output (MIMO) antenna system is illustrated next, and its strengths and weaknesses are outlined. Sample simulations show that, of the various spatial modes that characterize spatial diversity, only the classical phased array mode is useful for that purpose; the other spatial modes are not efficient radiators. Finally, it is demonstrated how reciprocity can be used to direct a signal to a preselected receiver, when two-way communication exists between the transmitter and the receiver, even in the presence of interfering objects. This embarrassingly simple method based on reciprocity has much lower computational complexity than a traditional MIMO approach and can even exploit polarization properties for effectively decorrelating multiple receivers in a multiple-input-single-output (MISO) system.

An Overview of Ultrawideband Antennas

Conventionally, the design of antennas is narrowband, and little attention is paid to the phase responses of the devices as functions of frequency. Even the use of the term broadband is misleading, as one essentially takes a narrowband signal and sweeps it across the band of interest. In fact, it is not necessary to pay much attention to the phase for narrowband signals, as the role played by the frequency factor is that of a scalar multiplier. However, if one now wants to use multiple frequencies and attempts to relate the data obtained at each frequency, then this frequency term can no longer be ignored. Depending on the application, this scale factor can actually have significant variations, which also depend on the size and shape of the bandwidth over which the performance of the system is observed. In the time domain, the effect of this frequency term creates havoc, as it amounts to a highly nonlinear operation and hence must be studied carefully. By broadband we mean temporal signals with good signal integrity. When it comes to waveform diversity, which implicitly assumes time-dependent phenomena, it is not possible to do any meaningful system design unless the effects of the antennas are taken into account. These effects will be illustrated in terms of the responses of the antennas and of the applicability of the currently popular methodology of time reversal to the vector electromagnetic problem.

To provide a background, notions of bandwidth will be discussed, especially in light of recent interest in ultrawideband (UWB) systems – the last decade has witnessed significantly greater interest in UWB radar. In that time, more than 15 nonmilitary UWB radars have been designed and fielded, with applications including forestry, detection of underground utilities, and humanitarian demining. Since an antenna is an integral part of such sensing systems, selected highlights of UWB antenna development will be very briefly summarized. It will also be illustrated how to design a discrete finite-duration time domain pulse under the constraint of the Federal Communications Commission (FCC) ultrawideband spectral mask. This pulse also enjoys the advantage of having a linear phase over the frequency band of interest and is orthogonal to versions of itself shifted by one or more baud intervals. The finite-duration pulse is designed by an optimization method and concentrates its energy in the allowed bands specified by the FCC. Finally, an example is presented to illustrate how these types of wideband pulses can be transmitted and received with little distortion.
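A hedged sketch of one such optimization (not necessarily the specific method used in the talk): choose the length-N discrete pulse that maximizes the fraction of its energy falling inside an allowed band, which reduces to a principal-eigenvector problem of prolate-spheroidal type. The sampling rate and band edges below are illustrative, and the FCC mask is approximated by a single allowed band.

    import numpy as np

    fs = 25e9                    # sampling rate, Hz (assumed)
    f_lo, f_hi = 3.1e9, 10.6e9   # allowed band, Hz (FCC UWB band edges)
    N = 64                       # pulse length in samples

    # In-band energy of a unit-norm pulse h is h^T Q h, where Q is the
    # bandpass sinc kernel Q[m, n] = (1/fs) * integral over the (two-sided)
    # band of exp(j 2 pi f (m - n) / fs) df.  Maximizing h^T Q h subject to
    # ||h|| = 1 gives the principal eigenvector of Q.
    d = np.arange(N)[:, None] - np.arange(N)[None, :]
    Q = (2 * f_hi / fs) * np.sinc(2 * f_hi / fs * d) \
        - (2 * f_lo / fs) * np.sinc(2 * f_lo / fs * d)

    w, V = np.linalg.eigh(Q)
    h = V[:, -1]                               # pulse with maximal in-band energy
    print("in-band energy fraction:", w[-1])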

Some fundamental problems in studying the responses of antennas in the time domain are related to our subconscious definition of reciprocity. In the frequency domain, reciprocity simply states that the spatial response of a sensor in the transmit mode is EQUAL to its spatial response in the receive mode at any frequency of interest. In the time domain, the spatial response of the sensor is time dependent. Hence, both the transmit and the receive impulse responses of the sensor are functions of the azimuth and elevation angles. However, for a fixed spatial angle, the transmit impulse response is NOT EQUAL to the receive impulse response of ANY sensor. In fact, mathematically one can argue that the transmit impulse response is the time derivative of the receive impulse response for any sensor. One may then conclude that somehow reciprocity is violated. The important fact is that a product in the frequency domain becomes a convolution in the time domain, so the reciprocity relationship is no longer a simple one. Even though the transmit impulse response is the time derivative of the receive impulse response, reciprocity still holds! This principle now helps us in characterizing different sensors for different applications, as their temporal responses are quite different.

As a first example, it will be shown that an electrically large wide-angle biconical antenna on transmit does not distort the waveform, whereas on receive it integrates the waveform under certain conditions. In contrast, a TEM horn antenna on transmit differentiates the input waveform, whereas on receive it does not distort the waveform. Use of such transmit/receive antenna pairs can actually produce channels with practically no dispersion. Experimental results covering GHz-bandwidth signals will be provided to illustrate these methodologies. Examples of the impulse responses of other types of antennas, such as the century bandwidth antenna and the impulse radiating antenna, will also be presented.
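A toy illustration of why such a transmit/receive pairing can yield a nearly dispersion-free channel, assuming (as stated above) that a TEM horn on transmit acts approximately as a differentiator and a large biconical antenna on receive acts approximately as an integrator; the pulse shape and time constants below are arbitrary:

    import numpy as np

    dt = 1e-12                             # time step, s (illustrative)
    t = np.arange(-500, 500) * dt
    pulse = np.exp(-(t / 50e-12) ** 2)     # driving pulse (Gaussian, arbitrary width)

    tx = np.gradient(pulse, dt)            # TEM horn on transmit ~ time derivative
    rx = np.cumsum(tx) * dt                # biconical antenna on receive ~ time integral

    # Up to a scale factor and an additive constant the received waveform
    # reproduces the driving pulse: the two operations cancel.
    err = np.max(np.abs((rx - rx[0]) - (pulse - pulse[0])))
    print("max deviation from original pulse shape:", err)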

Who Is the Father of Electrical Engineering?

The electromagnetics community makes a living out of Maxwell’s name, yet very few researchers really know what Maxwell actually did! The goal of this presentation is to make the point that Maxwell was one of the greatest scientists of the nineteenth century and could be called so even if he had done no work on electromagnetic theory. As Sir James Jeans pointed out, in his hands electricity first became a mathematically exact science, and the same might be said of other larger parts of physics. He developed almost all aspects of electrical engineering and can unambiguously be called the father of electrical engineering even if he had not worked on electromagnetics! He published five books and approximately 100 papers.

To start with, as Sir Ambrose Fleming pointed out, Maxwell provided a general methodology for the solution of Kirchhoff’s laws as a ratio of two determinants. Maxwell also showed how a circuit containing both capacitance and inductance would respond when connected to generators producing alternating currents of different frequencies. He developed the theory of electrical resonance in parallel with the acoustic resonance studied by Lord Rayleigh. Maxwell also provided a simpler mathematical expression relating the wave velocity and the group velocity when reviewing Rayleigh’s paper On Progressive Waves.

Maxwell showed that between any four colors an equation can be found, and this is confirmed by experiment. Secondly, from two equations containing different colors a third may be obtained. He described a graphical method by which, after arbitrarily fixing the positions of three standard colors, the position of any other color can be obtained by experiment. Finally, he studied the effect of red and green glasses on the color-blind, and proposed a pair of spectacles with one red lens and one green lens as an aid to detecting doubtful colors. He was the first to show that the eyes of color-blind people are sensitive to only two colors rather than the three of normal eyes; typically, they are not sensitive to red. He perfected the ophthalmoscope, initially developed by Helmholtz, to look into the retina. At the point where the retina is intersected by the axis of the eye there is a yellow spot, called the macula. Maxwell observed that the nature of this spot changes with the quality of vision. Macular degeneration affects the quality of vision and is the leading cause of blindness in people over 55 years old; today, the extent of macular degeneration of the retina is characterized by Maxwell’s yellow spot test. He also developed the fish-eye lens to look into the retina with little trauma. He provided a methodology for generating any color represented by a point inside a triangle whose vertices represent the three primary colors, which he chose as red, green, and blue; as he demonstrated, however, other choices for the primary colors are equally viable. The new color is generated by mixing the three primary colors in a ratio determined by the respective distances of the point representing the new color from the vertices of the triangle. Today this triangle, which differs in detail from his original, is called the chromaticity diagram and is used in movie and TV studios. Maxwell asked Thomas Sutton (the inventor of the single-lens reflex camera) to take the first color photograph, of a tartan ribbon. The experimental set-up was to take three separate pictures through different color filters and then project the superposed pictures to generate the world’s first color photograph. Today, color television works on this principle, but Maxwell’s name is rarely mentioned.

He established the electrostatic and electromagnetic units to set up a coherent system of units and presented a thorough dimensional analysis, putting on a scientific footing ideas introduced much earlier by Fourier and others. The ESU and EMU systems of units were later mislabeled as the Gaussian system of units. He also introduced the dimensional notation, using powers of mass, length, and time, that was to become the standard. He showed, in addition, that the ratio between the electromagnetic and electrostatic units has the dimension LT⁻¹ and a value very close to the velocity of light. Later he also performed an experiment to evaluate this number.

He used polarized light to reveal strain patterns in mechanical structures and developed a graphical method for calculating the forces in a framework. He developed general laws of optical instruments and even a comprehensive theory of the composition of Saturn’s rings. He also created a standard for electrical resistance.

When creating his standard for electrical resistance, he wanted to design a governor to keep a coil spinning at a constant rate. He made the system stable by using the idea of negative feedback, and worked out the conditions of stability under various feedback arrangements. This was the first mathematical analysis of control systems. He showed for the first time that, for stability, the characteristic equation of the governing linear differential equation must have all of its roots with negative real parts.

He not only introduced the first statistical law into physics but also introduced the concept of ensemble averaging, which is an indispensable tool in communication theory and signal processing. His work on the mythical creature termed Maxwell’s demon led to the quantification of uncertainty and to the introduction of the notion of information content. He was a co-developer, with Boltzmann, of the concept of entropy, which was later expounded by Leo Szilard and, some twenty years after that, by Claude Shannon as information theory.

He laid the basic foundation for electricity, magnetism, and optics. He introduced the terms “curl”, “convergence” and “gradient”. Nowadays, the convergence is replaced by its negative, which is called “divergence”, and the other two are still in the standard mathematical literature. His name is primarily associated with the famous four equations called Maxwell’s equations. The crux of the matter is that Maxwell did not write those four equations that we use today. Starting from Maxwell’s work, they were first put in the scalar form by Heinrich Hertz and in the vector form by Oliver Heaviside, who did not even have a college education! This is why Einstein used to call them the Maxwell-Hertz-Heaviside equations.

The goal of this presentation is to illustrate these points and, finally, to explain what exactly Maxwell did to come to the conclusion that light is electromagnetic in nature. The reason Maxwell’s theory was not accepted for a long time by contemporary physicists was that there were some fundamental problems with it. It is also difficult to explain, using modern terminology, what those problems were: Maxwellian theory cannot be translated into terms familiar to the modern understanding, because the very act of translation necessarily deprives it of its deepest significance, and it was this significance that guided research. The talk will explain what the problem was and what Maxwell really meant by the displacement current!

He was joint scientific editor of the 9th edition of the Encyclopedia Britannica, for which he provided an account of the motion of the Earth through the ether. Maxwell suggested that the ether could perhaps be detected by measuring the velocity of light when light was propagated in opposite directions. He discussed this further in a letter to David Peck Todd, an astronomer at Yale. Maxwell’s suggestion of a double-track arrangement led A. A. Michelson, when he was working under Helmholtz as a student, to undertake his famous ether-drift experiments in the 1880s, and the rest is history.

Maxwell always delivered scientific lectures for the common people using models. Even though he influenced developments in many areas of the physical sciences and started a revolution in the way physicists look at the world, he is unfortunately not very well known outside a few scientific communities. The reasons for this will also be described; they may perhaps be embedded in his prolific writing of limericks, as we will see.

Solving Challenging Electromagnetic Problems using MoM and a Parallel Out-Of-Core Solver

The Method of Moments (MoM) is a numerically accurate method for electromagnetic field simulation for antenna and scattering applications. It is an extremely powerful and versatile general numerical methodology for discretizing integral equations to a matrix equation. However, traditional MoM with sub-domain basis function analysis is inherently limited to electrically small and moderately large electromagnetic structures because its computational costs (in terms of memory and CPU time) increase rapidly with an increase in the electrical size of the problem. The use of entire-domain basis functions in a surface integral equation may still be the best weapon available in today’s arsenal to deal with challenging complex electromagnetic analysis problems. Even though higher-order basis functions reduce the CPU time and memory demands significantly, it is still critical to maximize performance to be able to solve problems as large as possible.

Single-processor computers of the past were generally too small to handle the problems that need to be solved today. Single-core processors used the IA-32 (Intel Architecture) instruction set. The 32-bit addressing restricts the memory available for the matrix to about 2 GB on most operating systems. The matrix size is therefore constrained to approximately 15,000 × 15,000 when the elements are stored in single precision and to approximately 11,000 × 11,000 when using double precision arithmetic. The IA-32 architecture was extended to 64 bits by Advanced Micro Devices (AMD) in 2003, while keeping the same nomenclature. A 64-bit computer can theoretically address 2⁶³ bytes (about 9.2 million terabytes) directly, which corresponds to the solution of a 10⁹ × 10⁹ matrix in single precision or a 0.76×10⁹ × 0.76×10⁹ matrix in double precision, provided enough physical and virtual memory is available. Memory addressing is still limited by the bandwidth of the bus, the operating system, and other hardware and software considerations. The price of large amounts of memory is very high and usually dominates the budget of a computer system intended for MoM EM solvers; such a large amount of RAM is too expensive for most research institutes.
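Assuming complex-valued matrix entries (8 bytes in single precision, 16 bytes in double precision), a quick back-of-the-envelope check of the figures quoted above:

    import math

    def max_square_matrix(addressable_bytes, bytes_per_entry):
        # Largest N such that a dense N x N matrix fits in the given memory.
        return math.isqrt(addressable_bytes // bytes_per_entry)

    GB = 2 ** 30
    # 32-bit process limit (~2 GB), complex single (8 B) and complex double (16 B)
    print(max_square_matrix(2 * GB, 8))    # 16384  (quoted above as ~15,000)
    print(max_square_matrix(2 * GB, 16))   # 11585  (quoted above as ~11,000)

    # 64-bit address space: 2**63 bytes is about 9.2 million terabytes
    print(2 ** 63 / 1e12, "TB")            # ~9.2e6 TB
    print(max_square_matrix(2 ** 63, 8))   # ~1.07e9  -> "10^9 x 10^9"
    print(max_square_matrix(2 ** 63, 16))  # ~0.76e9  -> "0.76x10^9 x 0.76x10^9"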

In recent years, due to thermal issues, single-core processor technology has been abandoned in favor of multi-core processors. Clock speed has been sacrificed for thermal efficiency, with multi-core processor clock rates reduced from their single-core predecessors by as much as 50%. No longer will numerical codes be able to benefit from continually higher processing speeds. So how will numerical codes advance in light of these technological changes? That is the crux of the research presented here.

How can we expand the capability and power of a numerically accurate MoM code to achieve the goal of solving a problem with a million by million unknowns? Such a problem requires 16 terabytes of storage for the matrix alone. Purchasing that much RAM is not practical because of the cost. However, the large matrix can be written to the hard disk. Presently, hard disks can be quite large, say 500 GB to 1 TB each, and can be cascaded in a RAID-0 configuration. The difficulty with writing to the hard disk is that disk access is typically much slower than access to RAM.

The parallel out-of-core integral equation solver can break through the memory constraints of a computer system by combining efficient access to hard disk resources with fast computation. It can run on a desktop with multiple single-core or multi-core processors by simply assigning one message passing interface (MPI) process to each core. It can similarly be executed on high performance computing clusters with hundreds or thousands of servers, each with multiple multi-core CPUs. On a Beowulf network of workstations with 50 processors, a dense complex system of equations of order 40,000 may need only about two hours to solve. This suggests that the processing power of high performance machines is under-utilized and that much larger problems can be tackled before the run time becomes prohibitively large.

The main challenge is that if the parallelization of the code is not done appropriately for the hardware under consideration, the result will be an extremely inefficient code as the clock speed of future chips decreases with increasing number of cores. The key is to reduce latency and obtain proper load balancing so that all the cores are fully utilized. The research presented here provides a road map through the jungle of new technology and techniques. The map is illustrated with results obtained on an array of computational platforms with different hardware configurations and operating system software.

In the first step, parallelization of the impedance matrix generation should produce a linear speedup with an increasing number of processes. If the parallelization is done in an appropriate fashion, this linear speedup will be evident. Distributing the computation of the matrix elements enables a user to solve very large problems in reasonable time. Since the matrix generated by MoM is full and dense, the LU decomposition used to solve the matrix equation is computationally intensive compared with the cost of reading and writing the matrix on the hard disk.
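A rough back-of-the-envelope comparison of why this is so (the disk bandwidth and aggregate compute rate below are assumptions chosen only for illustration): LU factorization costs O(N³) arithmetic, while the matrix occupies only O(N²) bytes, so for large N the disk traffic is amortized over far more computation.

    N = 1_000_000                 # unknowns (one million, as discussed above)
    bytes_per_entry = 16          # double-precision complex
    matrix_bytes = N * N * bytes_per_entry          # 16 TB

    flops_lu = (8.0 / 3.0) * N ** 3   # real flops for complex LU, ~(8/3) N^3

    disk_bw = 500e6               # assumed aggregate disk bandwidth, bytes/s
    flop_rate = 1e12              # assumed aggregate compute rate, flops/s

    print("matrix size: %.1f TB" % (matrix_bytes / 1e12))
    print("one full pass over the disk: %.1f h" % (matrix_bytes / disk_bw / 3600))
    print("LU arithmetic:               %.1f h" % (flops_lu / flop_rate / 3600))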

The second step, the solution of this distributed matrix stored across the various processes, is the implementation of the LU decomposition. The basic mechanism of an out-of-core (OOC) solver is to write the large impedance matrix onto the hard disk, read a part of it into memory for computation, and write the intermediate results back to the hard disk when the computation on that part is done. When a parallel OOC solver is used, each out-of-core matrix is associated with a device unit number, much like the familiar Fortran I/O subsystem.
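A minimal serial sketch of the out-of-core idea (Python, with a memory-mapped file standing in for the disk-resident matrix; no pivoting, no parallelism, and small illustrative sizes, so it is not the ScaLAPACK-based solver discussed in the talk):

    import numpy as np

    N, nb = 2048, 256   # matrix order and panel width (illustrative only)

    # Disk-resident matrix: a memory-mapped file stands in for the out-of-core store.
    A = np.memmap("impedance.dat", dtype=np.complex64, mode="w+", shape=(N, N))
    A[:] = np.eye(N) + 0.01 * np.random.rand(N, N)   # placeholder for MoM entries

    # Right-looking blocked LU without pivoting: at each step only one column
    # panel, one row slab, and the trailing update need to be touched in RAM.
    for k in range(0, N, nb):
        b = min(nb, N - k)
        panel = np.array(A[k:, k:k + b])             # read column panel from disk
        for j in range(b):                           # unblocked LU of the panel
            panel[j + 1:, j] /= panel[j, j]
            panel[j + 1:, j + 1:] -= np.outer(panel[j + 1:, j], panel[j, j + 1:])
        A[k:, k:k + b] = panel                       # write factored panel back
        if k + b < N:
            L11 = np.tril(panel[:b, :b], -1) + np.eye(b)
            U12 = np.linalg.solve(L11, np.array(A[k:k + b, k + b:]))  # row slab
            A[k:k + b, k + b:] = U12
            A[k + b:, k + b:] -= panel[b:, :b] @ U12  # rank-b trailing update
    A.flush()

A real parallel out-of-core solver distributes these panels block-cyclically over the MPI processes and streams the trailing update through RAM slab by slab rather than touching it all at once.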

Even though the servers in a cluster may be connected only by 10/100 Mbit/s network adapters, over 75% parallel efficiency can still be achieved. Examples will be presented to illustrate the solution of extremely large problems, such as analyzing targets in their natural environments. Comparisons with experimental data will be shown to demonstrate that the accuracy of the solution is maintained when solving electrically very large and complex problems.

This talk will describe the various principles involved in using MoM with higher-order basis functions, and using parallel out-of-core algorithms for the solution of complicated EM problems encountered in the real world. The talk will start by explaining a load balanced parallel MoM matrix filling scheme by using MPI virtual topology. The principles of ScaLAPACK in an out-of-core scenario will be described and illustrated such that even for extremely large problems the penalty for using an out-of-core solver over an in-core one may be of the order of 30%. This is in contrast to the long execution time encountered when using the built-in virtual memory mode of operating systems. Finally, plots will be presented to illustrate the principle of load balancing to keep the communication between cores to a minimum, resulting in a highly efficient code which is capable and flexible enough to be used on a single desktop PC, a collection of workstations, or a high performance computing cluster.
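To illustrate the load-balancing idea behind the ScaLAPACK-style distribution referred to above, here is a small sketch of the standard two-dimensional block-cyclic mapping (the process grid, block size, and matrix size are illustrative and not taken from the talk):

    from collections import Counter

    def owner(i, j, nb, p_rows, p_cols):
        # Process-grid coordinates owning global entry (i, j) of a matrix
        # distributed in a 2D block-cyclic layout with square blocks of size nb.
        return (i // nb) % p_rows, (j // nb) % p_cols

    nb, p_rows, p_cols = 64, 2, 3        # 2 x 3 process grid, 64 x 64 blocks
    counts = Counter(owner(i, j, nb, p_rows, p_cols)
                     for i in range(0, 1024, nb)
                     for j in range(0, 1024, nb))
    print(counts)   # each of the 6 processes owns 40-48 of the 256 blocks

Because every process holds blocks from all parts of the matrix, both the matrix fill and the LU factorization remain balanced as the computation sweeps across the matrix.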

The combination of MoM accuracy, hard disk capacity and parallel OOC solver efficiency results in a powerful tool capable of handling the challenging demands of 21st century antenna and radar communities.

An Exposition on the Choice of the Proper S-Parameters in Characterizing Devices Including Transmission Lines with Complex Reference Impedances and a General Methodology to Compute Them

The purpose of this paper is to demonstrate that the conclusions of a recently published paper dealing with little-known facts and some new results on transmission lines stem from an incomplete interpretation of nonphysical artifacts produced by a particular mathematical model for the S-parameters. These artifacts are not real and do not exist when a different form of the S-parameters is used. Thus, the first objective of this paper is to introduce the two different types of S-parameters generally used to characterize microwave circuits with a lossy characteristic impedance. The first one is called the pseudo-wave, an extension of the conventional travelling-wave concepts, and is useful when it is necessary to discuss the properties of a microwave network junction irrespective of the impedances connected to its terminals. However, one has to be extremely careful in giving a physical interpretation to the mathematical expressions, since in this case the magnitude of the reflection coefficient can be greater than one even when the load impedance is passive and the transmission line is conjugately matched. Also, the power balance cannot be obtained simply from the powers associated with the incident and reflected waves. The second type of S-parameters is called the power-wave scattering parameters. They are useful when one is interested in the power relations between microwave circuits connected through a junction. In this case, the magnitude of the reflection coefficient cannot exceed unity, and the power delivered to the load is given directly by the difference between the powers associated with the incident and reflected waves. Since this methodology deals with the reciprocal relations between powers of various devices, it may be quite suitable for dealing with a pair of transmitting and receiving antennas, where power reciprocity holds. This methodology is also applicable in network theory, where the scattering matrix of a two-port (or a multiport) can be defined using complex reference impedances at each of the ports without any transmission line being present, so that characteristic impedances become irrelevant. Such a situation is typical of small-signal microwave transistor amplifiers, where the analysis necessitates the use of complex reference impedances in order to study simultaneous matching and stability. However, for both definitions of the S-parameters, when the characteristic impedance or the reference impedance is complex, the scattering matrix need not be symmetric even if the network in question is reciprocal.
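A small numerical illustration of the difference, using the standard definitions (the pseudo-wave reflection coefficient (Z_L - Z_0)/(Z_L + Z_0) and Kurokawa's power-wave reflection coefficient (Z_L - Z_0*)/(Z_L + Z_0)); the particular complex characteristic impedance below is invented for the example:

    # Pseudo-wave vs. power-wave reflection coefficients for a lossy line
    # terminated in the conjugate match of its characteristic impedance.
    Z0 = 50 - 60j            # lossy characteristic impedance (illustrative value)
    ZL = Z0.conjugate()      # conjugately matched passive load

    gamma_pseudo = (ZL - Z0) / (ZL + Z0)               # travelling-wave definition
    gamma_power = (ZL - Z0.conjugate()) / (ZL + Z0)    # Kurokawa power-wave definition

    print(abs(gamma_pseudo))   # 1.2 -> exceeds unity even though the load is passive
    print(abs(gamma_power))    # 0.0 -> conjugate match: all available power delivered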

The second objective is to illustrate that when the characteristic impedance of the line, or the reference impedances in question, is real and positive, both definitions provide the same results. Finally, a general methodology, with examples, is presented to illustrate how the S-parameters can be computed for an arbitrary network without any a priori knowledge of its characteristic impedance.