Proceedings of

The Third VLSI Design and Test Workshops 1999

New Delhi, India

August 20-21, 1999

 

To promote applications and research related to all aspects of VLSI

 

 

 

Cosponsored By

VLSI Society of India

IEEE Computer Society Technical Council on Test Technology

IEEE Computer Society Technical Committee on VLSI

 

 

 

 

With Support From

Indian Institute of Technology, Delhi

Cadence India

Philips Semiconductors, India

Texas Instruments, India

Message from Steering Committee

The VLSI Workshops are a very significant event. Their purpose is to sow the seeds of ideas that will develop into future products and technologies. This is not one workshop, but a set of three running concurrently. The idea is to provide a complete picture and let each individual benefit most and contribute best.

The workshops are sponsored jointly by the IEEE Computer Society, the VLSI Society of India, and the DOE of the Government of India. We must recognize the contribution of Prof. C. P. Ravikumar. He initiated this work in 1998 and has been the driving force behind it. This year's workshop is the result of year-long planning and tireless effort by him and his team. They must be congratulated.

We welcome you to these workshops and urge you to stay in touch. As you leave the meeting, start planning for next year. You can expand the work you discussed here into detailed papers for a conference or a journal, and get started on the new ideas you just picked up.

The VLSI Design Conference Steering Committee, which oversees the organization of the Workshops and the VLSI Design Conference, thanks you for your participation.

Vishwani D. Agrawal (Chair), Bell Labs, USA

Anand Bariya, Cadence, USA

Srimat T. Chakradhar, NEC, USA

Asoke K. Laha, Interra, India

Yashwant K. Malaiya, Colorado State University, USA

Bobby Mitra, TI, USA

P. Pal Chaudhuri, Bengal Eng. College, India

Lalit M. Patnaik, IISc, India

Uday P. Phadke, DOE, India

A. Prabhakar, Datanet Corp., India

N. Ranganathan, Univ. of South Florida, USA

N. Venkateswaran, SSN College of Eng., India


Welcome

It is my unique pleasure to welcome you to the 3rd International VLSI Design and Test Workshops. This year, we continue to have three workshops in concurrent tracks: the Test workshop, the Logic Design workshop, and the Physical Design workshop. With the help of Bhargab Bhattacharya of ISI Calcutta and Anshul Kumar of IIT Delhi, I have put together the technical programs for the three workshops. You will find the final program in your registration kit. We have a total of 33 paper presentations, 6 tutorials, and one panel discussion. The workshop has grown over the last two years. We have a full 2-day program this year. Authors are being given 30 minutes of presentation time, as opposed to 20 minutes in the past. Tutorials are 2 hours long, based on the feedback we received last year. We hope you will find the technical program of the workshop up to your expectations. Please fill out the feedback form in your registration kit and return it at the registration desk at the end of the workshop.

I take this opportunity to thank all the people who have helped me in organizing the workshops. Vishwani Agrawal has been a constant source of inspiration. Dr. A. Prabhakar, President of the VLSI Society of India (VSI), and Dr. G.H. Sarma, Secretary of VSI, have been very cooperative in arranging for the sponsorship. Dr. Y. Zorian (President of IEEE TTTC) and Dr. N. Ranganathan (President of IEEE TC-VLSI) have been instrumental in obtaining cosponsorship for the workshop. Jassi Ahuja and Apurva Kalia of Cadence and Thomas Major of Philips Semiconductors (Bangalore) have arranged for financial assistance from their respective organizations for supporting student scholarships. Rohit Sharma, Mahesh Mehendale and Rubin Parekhji of Texas Instruments have helped me during the course of workshop organization in many ways. My faculty colleagues at IIT Delhi have given me invaluable support.

Along with Bhargab and Anshul, I thank all authors for their submissions, all the session chairs for their contribution, and all the members of the program committee who have helped us in reviewing the submitted abstracts. Our sincere thanks to Prof. Dinesh Bhatia, Hemendra Godbole, Apurva Kalia, Dr. Rubin Parekhji, Prof. V.C. Prasad, and Prof. Ranga Vemuri for agreeing to deliver the invited tutorials.

I cannot possibly name all the people who have contributed towards the organization of these workshops. I must record the invaluable help Vineet Sahula, my research student, has given me with local arrangements and registration. Dr. Paolo Prinetto, Dr. N. Ranganathan, and Dr. Y. Zorian have helped in publicizing the workshop. Mr. Ashwini Sharma of The Habitat World has made the banquet arrangements. Rohit Sharma has compiled the abstracts for publication on the Internet. The secretarial staff of the VDTT program of IIT Delhi has lent its support throughout. A number of M.Tech students of IIT Delhi have offered their help in the process of organization.

Once again, welcome to New Delhi and to the VLSI Design and Test Workshops.

 

 

C.P. Ravikumar

Organizing Chair

 

 


VLSI Test Workshop Proceedings

 

 

 

Program Chair:

C.P. Ravikumar

Indian Institute of Technology, Delhi

vsiSecy@vlsi-india.org

 

 

 

 

 

3rd VLSI Design and Test Workshops,

August 20-21, 1999

Habitat World, New Delhi, India

 

Test Sequence Generation with Cellular Automata

Prabir Dasgupta, Santanu Chattopadhyay, and I. Sengupta

Computer Science and Engineering

IIT Kharagpur

{pdgupta,santanu,isg}@cse.iitkgp.ernet.in

ABSTRACT

Testing of sequential circuits requires that test patterns be applied in a specific sequence. On-chip test pattern generators often suffer from the problem that they require the incorporation of idle cycles between the test patterns. In this paper we present a scheme that can generate any given sequence of test patterns using cellular automata (CA) and some associated circuitry, without any inserted cycles. The scheme also results in up to 95% reduction in memory requirement over the direct storage of patterns. Moreover, the regular, modular, and cascadable structure of CA with local interconnections makes the scheme ideally suited for VLSI implementation.
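
CA-based generators of this kind are typically built from one-dimensional hybrid cellular automata using rules 90 and 150. The following is a minimal sketch of one next-state step of such a CA (an illustration only, with an arbitrary rule vector, not the authors' specific construction):

    def ca_step(state, rules):
        # One synchronous step of a one-dimensional, null-boundary hybrid CA.
        # rules[i] selects the update of cell i: rule 90 XORs the two
        # neighbors; rule 150 additionally XORs the cell itself.
        n = len(state)
        nxt = []
        for i in range(n):
            left = state[i - 1] if i > 0 else 0       # null boundary
            right = state[i + 1] if i < n - 1 else 0  # null boundary
            bit = left ^ right
            if rules[i] == 150:
                bit ^= state[i]
            nxt.append(bit)
        return nxt

    # Example: iterate a 4-cell CA from a nonzero seed
    state, rules = [1, 0, 0, 0], [90, 150, 90, 150]
    for _ in range(5):
        state = ca_step(state, rules)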

 

 

Test Simulation Flow for Mixed Signal ICs

Craig Force and Amit Premy

Texas Instruments

Force@ti.com, amitpremy@india.ti.com

ABSTRACT

 

With time-to-market becoming a key factor in the success of any IC product, much emphasis is being placed on integrating test development with device simulation, so that test solutions are ready before the actual silicon comes out.

Texas Instruments has worked with EDA (Analogy and Cadence) and ATE (Teradyne) companies to develop a Mixed Signal Test Simulation Flow which facilitates circuit simulation of the target ATE, Teradyne's A5XX mixed signal tester, together with the device under test (DUT). The exchange of events between the ATE and design models takes place in a common Saber-Verilog mixed signal simulation environment, wherein simulations are controlled from the test program, which executes on Image, Teradyne's tester control software. Simulation results are available for analysis in both ATE debug tools and design simulation tools. The analog partitions of both the ATE and the DUT use Saber as the simulator, which uses MAST for modeling analog components, whereas the digital partitions are simulated using Verilog-XL.

The Image Exchange module, developed by Teradyne specifically for exchanging events in test simulations, generates a time-stamped event record of the tester hardware bus as the program executes; these records are passed through Verilog to be converted into design simulator stimulus. Design simulation results are applied as tester hardware bus events to Image, which in turn applies them to the test program variables and the debug windows.

This flow was used to develop the test program for a 12-bit, 200KSPS ADC. All the digital patterns were debugged using this flow. This helped in reducing the test program debug time on the tester from eight weeks to six weeks, a saving of about 25 percent.


 

A BIST for Detecting Multiple Stuck-open and Delay Faults by Transition Counts

Hafijur Rahaman, Dept. of Electrical Engg., P. C. Roy Polytechnic, Calcutta 700 032, India

Debesh K. Das, Dept. of Comp. Sc. & Engg., Jadavpur University, Calcutta 700 032, India, debeshd@hotmail.com

Bhargab B. Bhattacharya, ACM Unit, Indian Statistical Institute, Calcutta 700 035, India, bhargab@isical.ac.in

 

The main problem in designing BIST for stuck-open and delay faults is that detection of such faults requires two-pattern tests. Furthermore, a two-pattern test may be invalidated due to arbitrary delays in the circuit. A two-pattern test that cannot (can) be invalidated is called robust (non-robust). Our scheme detects only robustly detectable faults. We consider two-level AND-OR or NAND-NAND designs of combinational circuits. However, for detection of stuck-open faults, we may also consider any single complex cell design. It is known that single-input-change (SIC) pairs are sufficient to detect robustly detectable path delay faults or stuck-open faults in such designs. Note that a multiple path-delay fault consisting of a set of robustly detectable single path delay faults may remain undetectable in an exhaustive approach. For example, if the signature analyzer (SA) counts the syndrome (total number of 1's), transition counts, or any other signature in the output, one single path delay fault may cancel the effect of another fault in the signature. A similar phenomenon may happen when stuck-open faults occur simultaneously in both n- and p-transistors in CMOS designs. In our proposed design, we avoid this problem by counting transitions at the output corresponding to the odd pairs of the input sequence. That is, for an input sequence, we observe the transitions of the output corresponding to the 1st to 2nd inputs, 3rd to 4th inputs, 5th to 6th inputs, and so on.

In our design, for an n-input CUT, the TPG generates all n·2^n SIC pairs by a sequence of 2n·2^n vectors. The SA observes the transitions at each odd pair. More specifically, the SA checks whether there is any transition of 1 to 1 or 0 to 0 in the output response while applying the 1st and 2nd inputs, 3rd and 4th inputs, 5th and 6th inputs, and so on. Notice that the SA does not observe the transitions in the output corresponding to the 2nd to 3rd inputs, 4th to 5th inputs, 6th to 7th inputs, and so on. To implement this, the SA also keeps track of the inputs, and at every even pulse it counts the transitions. In the design of the TPG, a 2n-bit circular shift register (SR) is used. Outputs of the odd stages of the SR are connected to n XOR gates to change one bit of a vector Xi produced by an n-bit binary counter (BC). The test pattern generator (TPG) generates the sequence Xi1, Xi, Xi2, Xi, Xi3, Xi, ..., Xin, Xi for every possible input Xi, where Xij differs from Xi only in the j-th bit.

The scheme detects multiple stuck-open and path delay faults in two-level circuits by counting the number of transitions at the output of the CUT. The TPG generates a sequence of length 2n·2^n for an n-input CUT. Transitions are counted corresponding to inputs at odd pairs.
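
The TPG sequence and the SA's odd-pair observation rule can be mimicked in software. The sketch below is our illustration of the scheme as described above, not the hardware design itself:

    from itertools import product

    def sic_sequence(n):
        # For every n-bit input Xi, emit Xi1, Xi, Xi2, Xi, ..., Xin, Xi,
        # where Xij differs from Xi only in the j-th bit; each odd pair
        # (Xij, Xi) is a single-input-change (SIC) pair.
        for xi in product((0, 1), repeat=n):
            for j in range(n):
                xij = list(xi)
                xij[j] ^= 1
                yield tuple(xij)
                yield xi

    def odd_pair_signature(outputs):
        # Observe only the 1st-2nd, 3rd-4th, ... output pairs and count
        # those whose two outputs are equal (a 1-to-1 or 0-to-0 event),
        # one plausible reading of the SA check described above.
        return sum(outputs[k] == outputs[k + 1]
                   for k in range(0, len(outputs) - 1, 2))

    # Example with a 3-input AND gate as the CUT
    outputs = [min(v) for v in sic_sequence(3)]
    signature = odd_pair_signature(outputs)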

Design for Testability Issues in Reusable Cores

T. Ramesh, Philips Semiconductors, Bangalore, India

tramesh@blr.sc.philips.com

ABSTRACT

 

In traditional IC design, every circuit is designed more or less from scratch, and reuse is limited to standard-cell libraries containing basic logic gates. This design style is being replaced by one in which an IC consists of multiple large Reusable modules as well as dedicated modules. These large Reusable modules are called Cores; examples are CPUs and memories. Cores may come as hard (layout), firm (netlist), or soft (RTL).

Core-based design divides the design community into two groups: Core Providers and Core Users. Because short time-to-market is the driving force behind Core-based design, Cores often come with pre-computed tests for manufacturing defects. These tests can be based on various test methodologies, such as function test, scan test, or built-in self test. The Core user has to take care that suitable access paths from IC pins to embedded Core and vice versa exist, such that the pre-computed test patterns can be transported to and from the Core.

Designers (re)invented the reuse paradigm to speed up IC design; it seems obvious to employ the same paradigm to speed up test development. Just as a Core design is embedded as-is into an IC design to reduce time to market, the tests for the Core should also be developed for reuse in the same way. In general, development of high-quality tests requires a certain amount of design knowledge and often also involves design adaptations (design-for-test). Reuse of core-level tests will depend on the fault detection qualities of the core test as delivered by the Core provider.

 

 

 

Techniques for Improving Fault Coverage in Embedded Core Based Systems

Rubin A. Parekhji , Texas Instruments, India

parekhji@india.ti.com

ABSTRACT

 

The use of embedded cores in the rapid construction of systems on silicon has opened up new challenges in VLSI testing. An important one among them is the need to meet high fault coverage goals in an embedded context. While conventional design-for-test techniques have addressed this problem for individual chip designs, in an embedded context the fault coverage is influenced not only by the cores themselves, but also by the logic surrounding them, memories, and test interfaces.

This presentation discusses major techniques for enhancing the fault coverage of core based systems. It reviews the design components of these systems, how they impact the coverage, and techniques to improve the latter. Various testability enhancement techniques are described, based on the design methodology, preferred system configuration, fault grading of verification test cases, efficient generation of such tests for targeted faults and fault models, BIST, and fault analysis of undetected faults. It is shown how each of these techniques and their combinations plays an important role in the test of embedded core based systems, and how restrictive they are when considered in isolation.

These techniques have been evaluated on Texas Instruments' new DSP core, the TMS320C27xx, and devices using this core. Through a careful selection of these techniques, the fault coverage of the various modules built into devices using this core has been successfully raised in an embedded context. These improvements have, in turn, also influenced the design of the core and the system themselves.

 

Testing Memory Designs

Srikanth Balasubramanian and V. Sridhar,

Philips Semiconductors, Bangalore, India

Sridhar.v@blr.sc.philips.com

 

ABSTRACT

The memory section of chips with embedded memory is more prone to defects in the production process than the other parts. Therefore, during the design phase, special attention has to be given to the integration of memory in order to enable effective testing of the whole chip. In designing testable memories and logic, the first step is to define the global test strategy. The four basic test methodologies available for memory are

* Scan Test

* Built-in Self Test (BIST)

* Multiplexer access

* Functional Testing

Before choosing the test methodology, certain aspects have to be considered, such as fault coverage, test time, area usage, diagnostics, and protection. The importance of these items may vary greatly from one chip to another.

Of the above four test methodologies, Scan Test is the way to test random logic. The principle of Scan Test can also be used for getting access to the embedded memories in the circuit. Built-in Self Test is, in general, most effective for larger memories; its execution time is short compared to scan test approaches. The embedded memories can also be accessed through multiplexers; the speed of testing is high compared to scan test. One drawback of this approach is that many pins are used as test pins. In the situation where all the memory signals are connected to chip pins, the memory is accessible and testable from the external world. In such cases it is easy to test the memory functionally. In this paper we discuss Scan Test in more detail.

 

Tutorial on Analog Testing

V.C. Prasad, Electrical Engineering, IIT Delhi

Vcprasad@ee.iitd.ernet.in

ABSTRACT

 

Most systems on chip today contain both digital and analog components. Testing of analog circuits is therefore gaining considerable amount of importance. The tutorial will discuss techniques for testing analog circuits.

Tutorial on Verification of Synthesized RTL Designs

Ranga Vemuri, University of Cincinnati Cincinnati, USA

Ranga@ececs.uc.edu

ABSTRACT

 

This tutorial discusses a number of approaches to the verification of RTL designs generated by high-level synthesis systems:

1. Validation Based on Design Verification Testing and Simulation

This approach involves systematic generation of benchmarks, development of behavior-level design verification test generators ensuring a desired degree of path coverage, and development of test-bench compilers to migrate behavior-level tests to the RT level by performing suitable timing transformations.

2. Formal Verification of Synthesized Designs using Model Checking

Formal verification using model checking involves automated generation of temporal logic specifications as a byproduct of the synthesis process and the development of suitable temporal and spatial abstractions to automatically simplify the RTL designs in order to mitigate the state-explosion problem.

3. Incorporation of Correctness Proofs as Program Assertions

In this method, a formal proof of correctness of the high-level synthesis algorithms is established in a higher-order logic theorem prover. The proof consists of a hierarchy of lemmas and theorems. These lemmas and theorems are then incorporated in the high-level synthesis software in the form of program assertions. When exercised successfully, these assertions guarantee that the synthesized RTL design is correct.

4. Automated Generation of HOL Specifications and Proof Scripts

Variable and state binding information generated by a high-level synthesis tool can be used to automatically produce a set of theorems that establish the conditions under which these bindings ensure RTL correctness, together with their proof scripts in higher-order logic. These scripts can be executed using a higher-order logic proof checker, resulting in completely automated verification of the RTL design.

Our own efforts in developing aspects of these techniques have been reported at CAV'92, DAC'93, VLSI'95, FMCAD'96, ICCD'98, FMCAD'98, TPHOL'98, and DATE'98. Synthesis systems developed and validated using these techniques have been successfully used in implementing over a hundred ASIC and FPGA designs. Results and experiences in developing and implementing these techniques in successive generations of synthesis tools will be included in the presentation, in addition to the progress being made by various researchers in the community.

 

Tutorial on Verification of VLSI Systems

Apurva Kalia, Cadence Design Systems, Noida, India

Apurva@cadence.com

ABSTRACT

 

Verification of state-of-the-art electronic designs is fast turning into a major challenge in today's electronic design processes. With current designs touching the 10 million transistor mark, logic as well as timing verification of these designs has become such a formidable task that it is governed more by time-to-market than by the completeness of the verification.

With the availability of massive amounts of silicon real estate, designers are pushing bigger and bigger designs into single chips. While this poses a big design challenge, the verification challenge is even bigger, since it grows at a much faster rate. Verifying systems-on-chip becomes more problematic as one realizes that traditional methods for verifying either systems or chips cannot be used for systems-on-chip, and that new solutions need to be created for handling the verification of such designs. The need to handle multi-source intellectual property further compounds the verification issues.

As event-based methodologies lose their relevance and applicability, new methodologies are emerging to handle the capacity and performance issues of today's verification needs. Event-based simulation technologies are being complemented with cycle simulation, formal verification, and "informed/focused testing" to handle some of these new needs.

This tutorial aims at exploring the challenges of verifying systems, chips, and systems-on-chip, and the various methodologies that can be employed to address these challenges. We will look at certain specific problems associated with the verification of SoC designs, and discuss new, emerging technologies and techniques to verify them.

 

 

Power-constrained Optimization of Test Plans

Gaurav Chandra, Ashutosh Verma, C.P. Ravikumar,

Electrical Engineering, IIT Delhi

Gauravc@synopsys.com, ashutosh_verma@hotmail.com, vsiSecy@vlsi-india.org

ABSTRACT

 

With the growing complexity of systems-on-chip (SOC), testing them has emerged as a daunting problem. In this paper, we consider the problem of testing a system-on-chip without violating a user-specified power constraint. Power dissipation is an issue in testing of SOCs for several reasons. Since test vectors are generally applied in an uncorrelated manner, the power dissipation of an embedded core under test can be several times its power dissipation during the normal mode of operation. When there are many embedded cores on the SOC, if their test sessions are not appropriately scheduled, the power dissipation can exceed the power rating of the chip. We provide a greedy algorithm for scheduling test sessions, with the objective of minimizing the test application time. We present results on two example designs.
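
One plausible shape for such a greedy scheduler (our sketch only; the paper's exact heuristic and cost model may differ) packs cores into concurrent sessions under the power budget, longest tests first:

    def schedule_tests(tests, power_budget):
        # tests: list of (core_name, test_time, test_power), all hypothetical.
        # Each session runs its cores concurrently, so its power is the sum
        # of the core powers and its length is the longest core test in it.
        sessions = []
        for name, t, p in sorted(tests, key=lambda x: -x[1]):
            for s in sessions:  # first session with enough power headroom
                if s['power'] + p <= power_budget:
                    s['cores'].append(name)
                    s['power'] += p
                    s['time'] = max(s['time'], t)
                    break
            else:               # no session fits: open a new one
                sessions.append({'cores': [name], 'power': p, 'time': t})
        total_time = sum(s['time'] for s in sessions)
        return sessions, total_time

    # Example: three cores under a 10-unit power budget
    sessions, t = schedule_tests([('cpu', 8, 6), ('dsp', 5, 5), ('ram', 4, 3)], 10)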

 

 

Recursive Pseudoexhaustive Test Pattern Generation with Cellular Automata

Prabir Dasgupta, Santanu Chattopadhyay, and I. Sengupta

Computer Science and Engineering

IIT Kharagpur

{pdgupta,santanu,isg}@cse.iitkgp.ernet.in

ABSTRACT

This paper presents a recursive technique for the generation of pseudoexhaustive test patterns. The scheme is optimal in the sense that the first 2^k vectors cover all adjacent k-bit spaces exhaustively. It requires substantially less hardware than existing methods, and utilizes the regular, modular, and cascadable structure of local-neighborhood cellular automata (CA), which is ideally suited for VLSI implementation. In terms of XOR gates, this approach outperforms earlier methods by 15-50%. Moreover, the testing methodology and hardware requirement have been established analytically, rather than by simple simulation and logic minimization.
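
The optimality claim can be checked operationally: every window of k adjacent inputs must see all 2^k combinations within the generated vectors. A small checker for this property (our illustration only):

    def covers_adjacent_k(vectors, k):
        # True iff every window of k adjacent bit positions sees all 2^k
        # patterns across the given vectors (adjacent-k pseudoexhaustive
        # coverage); vectors is a list of equal-length 0/1 tuples.
        n = len(vectors[0])
        for start in range(n - k + 1):
            seen = {v[start:start + k] for v in vectors}
            if len(seen) < 2 ** k:
                return False
        return True

    # These four vectors are exhaustive on both adjacent 2-bit windows
    vecs = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
    assert covers_adjacent_k(vecs, 2)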

 

Design Tradeoffs for Test of Embedded Cores

Rubin A. Parekhji, Texas Instruments (India) Ltd.

Parekhji@india.ti.com

ABSTRACT

 

The use of embedded cores in the rapid construction of systems on silicon has opened up new challenges in VLSI testing. Important ones among them include the need for high fault coverage, reduction in test overhead and test cost, and improvement in test quality in an embedded context. While each of these requirements has been addressed previously for individual chip designs, conventional design-for-test techniques have to be suitably modified for the test of such embedded cores and core based systems.

This presentation evaluates the impact and effectiveness of various test methodologies for embedded cores in terms of test coverage, test quality, and test cost. Various test quality and cost metrics are reviewed. Situations wherein different techniques can be profitably employed are identified, along with the corresponding design and test constraints. It is shown how the choice of the test methodology, targeted fault models, test generation tools, and test automation techniques impacts the test goals for embedded cores.

Representative results depicting these tradeoffs have been obtained through various experiments on Texas Instruments' new DSP core, the TMS320C27xx. These are presented, and their impact on the overall test cost and test quality is explained. The results highlight the various design tradeoffs that exist for the test of embedded cores, and the preferred alternatives.

 

On-Chip Characterization and Debugging Methodology for High Speed

Embedded Memories

Anand Hardi, Anil Kalra, Balwant Singh, Santosh, Shamsi Azmi

STMicroelectronics, India

Balwant.Singh@st.com

 

ABSTRACT

 

Today any new design takes approximately 20% of its schedule in design and 80% in verification (including silicon verification). At the same time, technology is shrinking at a rate that pushes existing ATE (Automatic Test Equipment) to their performance limits in timing accuracy, speed, and number of channels.

Embedded memories, being a major component of any design and sometimes occupying up to 70-80% of the total chip area, need more focus on the problem of on-chip characterization of functionality, access time, setup time, and maximum operating frequency. On-chip BIST (Built-In Self Test) is widely used for functional characterization, which is usually done at less than the targeted operating frequency because of ATE/pad operating frequency limitations. On the other hand, characterization of access time for embedded memories is difficult to perform off-chip, especially with good accuracy.

This paper on an on-chip BISC (Built-In Self Characterizer) focuses on reducing silicon characterization time while characterizing embedded memories for functionality, access time, setup time, and maximum operating frequency with improved accuracy, eliminating the ATE dependency. The proposed idea can reduce the testing time by 60%.

 

 

Processor Emulation Design and Verification

Rubin A. Parekhji, Texas Instruments (India) Ltd.

Parekhji@india.ti.com

ABSTRACT

 

Emulation has become a niche component in the design of today's processors on account of the increasing complexity of the logic therein and the demanding applications at which they are targeted. While in-circuit emulation has traditionally been used, the approach suffers from various limitations in present day designs. An important requirement, which has emerged therefrom, is the need to design emulation logic into the processor itself, to support a new self-emulation mode of operation.

While on one hand the emulation logic provides more visibility into the processor operation and facilitates application development and debug, on the other it introduces new design and verification concerns on account of its interaction with the main central processing unit. This, in turn, significantly impacts both design and verification of the processor itself.

This tutorial discusses emulation design and verification for a digital signal processor (DSP) core and DSP system. The emulation paradigm is explained. Generic emulation methodologies are reviewed, and design techniques for supporting self-emulation are described. It is shown how design for emulation significantly impacts the verification and test of the processor core itself. Design and verification costs incurred are explained, and techniques and tradeoffs for their minimization are described. These techniques have been effectively employed in the design of Texas Instruments' new DSP core, the TMS320C27xx, which has been designed in India.

 

 

 

 

 

Logic Design Workshop Proceedings

 

 

 

Program Chair:

Anshul Kumar

Indian Institute of Technology, Delhi

Anshul_kumar@hotmail.com

 

 

 

 

 

3rd VLSI Design and Test Workshops,

August 20-21, 1999

Habitat World, New Delhi, India

 

Capability Maturity Model for IC Design - What Should It Be?

Sushil Sinha and Mahesh Mehendale, Texas Instruments (India) Limited

{sushil,mhm}@india.ti.com

ABSTRACT

The Semiconductor Industry Association has reported that improvements in processing and manufacturing technology have increased leading-edge product design complexity at a 58% compound annual growth rate, while design productivity has improved at only a 21% compound annual growth rate, resulting in a growing gap between designers' techniques and increasingly complex designs. IC design is moving towards 'System-on-a-Chip'. The widening gap between IC design complexity and design productivity forces us to look for a framework that can be purposefully employed for such a problem.

Since IC design doesn’t have a unified framework of its own, one has to look for available framework in other domains. In software engineering, Capability Maturity Model (CMM) from Software Engineering Institute (SEI) is very promising. Taking advantage of the work done by pioneers in the area of Total Quality Management, a five level CMM has been developed that establishes the project management and engineering foundation in the initial stages and quantitative control of processes at higher levels of maturity. The maturity framework was first inspired by the theme of ‘Quality Management Maturity Grid’ based on the book "Quality is Free" by Philip Crosby.

Extended Signal Flow Graph Model for VLSI Design Processes

Vineet Sahula, and C.P. Ravikumar

Electrical Engineering, IIT Delhi

Sahula@ee.iitd.ernet.in, rkumar@ee.iitd.ernet.in

 

ABSTRACT

There is tremendous growth in the complexity of VLSI systems being designed today. To be cost effective and remain competitive, there is a constant need to reduce the duration of the design cycle. Decisions for design process improvement have traditionally been based upon historical data and designers' perceptions of the critical sections of design process flows.

Since design processes also continue to increase in complexity, it becomes mandatory to base process improvement decisions on quantitative analysis. We propose an analytical technique for modeling concurrent design processes, which we call the Extended Signal Flow Graph (ESFG) technique.

It extends the existing Signal Flow Graph (SFG) technique. Our approach is capable of capturing the concurrent behavior of design process flows and, unlike the SFG technique, is able to provide the total completion time for a design process. The SFG technique, when applied to model a concurrent process, can only provide the total effort in the design process, not the process completion time. We illustrate our approach by applying it to an abstract design flow for wireless mobile transceiver design.
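
The distinction can be seen on a small task graph: the sketch below (our own illustration, not the ESFG formalism itself) computes both the total effort, which is all an SFG yields for a concurrent flow, and the completion time, which the ESFG is designed to provide:

    def completion_time_and_effort(durations, preds):
        # durations: {task: time}; preds: {task: [predecessor tasks]} (a DAG).
        # Total effort is the sum of all task times; completion time is the
        # longest path through the precedence DAG.
        finish = {}
        def f(t):
            if t not in finish:
                finish[t] = durations[t] + max(
                    (f(p) for p in preds.get(t, [])), default=0)
            return finish[t]
        completion = max(f(t) for t in durations)
        effort = sum(durations.values())
        return completion, effort

    # Two 3-day tasks running concurrently, feeding a 1-day merge task:
    dur = {'synthesis': 3, 'simulation': 3, 'merge': 1}
    preds = {'merge': ['synthesis', 'simulation']}
    print(completion_time_and_effort(dur, preds))  # (4, 7)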

Architectural Exploration Using Advanced C Models

Sirisha Voruganti and Md.Saif Khan,

Philips Semiconductors, Bangalore, India

Sirisha.voruganti@blr.sc.philips.com

ABSTRACT

 

Present-day system design requires an implementation that blends ASIC chips with processors, memory, and other special modules such as multimedia and DSP modules. Processors are the heart of any information processing system, and the frequent release of microprocessors affects many strategic industrial decisions. Thus the design of well balanced, long lasting processors is extremely important. There are a number of issues in processor design that require consideration while exploring new architectures, such as speed, area, cost, and interfacing. The key to optimizing a design's performance is the availability of fast and accurate hardware simulation models early in the design cycle. For embedded system-on-chip design, using hardware prototypes for performance evaluation is not practicable. It is under this scenario that simulation models find wide usage. These models provide feedback on the functionality and performance of alternative design approaches.

Design aspects including hardware architecture and algorithm efficiency can all be effectively analyzed and optimized if the right kind of models can be obtained early enough in the design. Our work addresses the needs of system architects, processor architects, and embedded software developers by providing the capability to carry out performance evaluation of critical parts of a system within an environment such as TSS. TSS (Tool for System Simulation) is a framework supporting simulation of C models within Philips. TSS also integrates with Leapfrog for C-VHDL and C-Verilog co-simulation. This environment may in some cases offer a low cost alternative to expensive emulation boxes. Since the simulation is cycle accurate, the improvement in simulation time is considerable over traditional VHDL simulation.

 

Tutorial on Timing Closure in System on Chip Designs

Hemendra Godbole, Synopsys, USA

Hemen@synopsys.com

 

ABSTRACT

Deep submicron SoC designs integrate numerous design styles. Rapid and accurate timing convergence has become a key competitive advantage for these designs. Shorter design cycles compel high-level abstractions, while the physical effects in the sub-0.25µm domain create a pull towards transistor-level analysis and modeling. The issue of interconnect vs. gate-level capacitance adds greater complexity in the design planning stages, while newer technologies like SOI, mixed logic-memory processes, and copper interconnects further challenge accurate modeling and, hence, accurate timing convergence.

This tutorial presents a perspective on achieving timing convergence for a mixed gate/transistor-level design. Block-level timing issues will be discussed for logic styles ranging from static to domino to mixed analog-digital blocks. Appropriate usage of dynamic and static timing analysis techniques will be illustrated to create portable timing modules for the full-chip analysis. Full-chip timing issues like crosstalk, clock skew, design budgeting, RC modeling, and IP reuse will then be added to illustrate techniques used by leading-edge designers to achieve timing convergence at the full-chip level.

Organization of Slides

I. Timing Closure for SoC designs

II. SoC requirements for timing closure

III. Chip-perspective

  1. Full-Chip (P&R, budgets, routing, clk-tree, parasitics)
  2. Block (EVR, sub-partitions, gate/trans, static/dynamic, BVR)
  3. Full/Block iterations; top five methods of fixing violations: Analysis, Extraction, Optimization, Verification, Modeling

 

IV. EDA perspective

V. Challenges ahead

VI. KEY takeaways for SoC designers

1. Marry your methodology into the EDA flows based on your tradeoffs for accuracy/runtime/capacity

2. Logic & circuit partitioning to stay flexible

3. More analog, more custom content. Analog-IP is a myth!

4. STA is the way to go, with dynamic-on-demand

5. "Closure", not "Analysis", is the key; fewer scripts are better

6. New business models could mean iSolutions; smaller blocks are better

7. "Work at gate level, performance at transistor level"; models for IP reuse are gaining in importance

8. Do not throw your "EE/circuit-design" books away... yet!

9. No "estimates" in the new world; accuracy for CTX and CSO is key!

10. Review your RTL with a circuit team before you encode your block.

 

 

RTL Design of a Small Memory

B. Suresh and R. Harinath, Texas Instruments India Ltd.

{bsuresh,han}@india.ti.com

ABSTRACT

 

Present day trends show an increased usage of embedded RAMs in ASIC design, with configurations ranging from a few words to thousands of words and data widths varying from one to a few hundred. These memory requirements are usually met by compiler memories (embedded SRAM). Compiler memories are software generated, with the base cell (storage element) replicated based on the user's input of number of words and word length. Compiler memories are usually designed for larger configurations; they are not optimized for the lower range (say, 5 words) due to the area overhead associated with the control logic circuits. To address the very-low-end configurations, register files can be designed, but the time taken to design a register file is very high. In order to address some of these problems with compiler memory usage, we propose a methodology of using synthesizable RAMs for very small configurations. A synthesizable RAM is a register transfer level (RTL) description of a memory in a Hardware Description Language (HDL), functionally equivalent to a compiler memory. The HDL can be translated into hardware gates and mapped onto a specific technology node by any synthesis tool, such as Design Compiler™ from Synopsys or Build-Gates™ from Ambit. The RTLs are designed to ensure that the synthesized netlist meets the area/timing requirements.

The talk first gives a brief description of the design constraints for synthesizable RAM. Various solutions are then evaluated, starting with third-party solutions. A single port memory architecture is explained, along with how this architecture can be adapted for low power, performance, and testability. The influence of the ASIC library on the quality of results is also illustrated. Finally, we conclude with some simple experimentation taking a two-port RAM as an example. The main advantages of these synthesizable memories are reduced area and increased design flexibility (in terms of meeting target performance and obtaining the desired physical configuration). Experimental results show that replacing lower-end compiler macros with their synthesized counterparts can lead to a memory area reduction of up to 32% in an 800K-gate ASIC design, while meeting all the timing requirements for the design.

Design and Results of a Hierarchical Megabit SRAM Compiler

Christophe Frey, ST Microelectronics, India

Christophe.Frey@st.com

ABSTRACT

 

A low voltage embedded single port memory generator implemented in a six-metal, 0.18µm standard CMOS process is described. The typical 8K×16 cut achieves a 300 MHz maximum frequency, with a 3.3 ns access time at 1.3 V and 25°C, and a typical power of 180 µW/MHz at 1.3 V. Special care has been taken to reduce the standby current. The hierarchical wordline architecture and a differential output bus allow low power characteristics. At the same time, high speed is reached, especially thanks to a novel dynamic wordline decoder.

Power Constrained Test Scheduling For Embedded Random Access Memories

G.Sreenivas, CYPRESS Semiconductors, India

Xgs@cypress.com

 

ABSTRACT

 

Memory is an important component of computers, along with the central processing unit and input-output devices. As memory chips account for a significant fraction of the total number of chips in a computer, to ensure that the overall system functions properly it is necessary to make sure that each chip in the memory subsystem is functionally correct. Testing of memory chips has thus become compulsory.

A memory chip in general consists of three blocks: the memory cell array, the address decoder, and the read-write logic. In an embedded memory chip all three blocks are integrated onto a single chip. In any CMOS circuit, power dissipation depends on the switching activity in the circuit, the load capacitance, and the voltage swing at the output node. In this work, power dissipation during testing of a memory chip has been estimated for four traditional test methods and for pseudorandom testing. In addition to the general architecture, the divided word-line architecture is also considered as a low power alternative. For each of the test methods and architectures, expressions are derived for the transition density, which is a measure of switching activity, and for the load capacitances associated with each block. These expressions are general with respect to memory size. The average power dissipation is then estimated for each block, and these are summed to obtain the overall chip power.
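
For reference, the textbook CMOS dynamic power relation underlying such estimates (a standard formula, not taken from the paper) is, in LaTeX notation:

    P_{avg} = \alpha \, C_L \, V_{dd} \, V_{swing} \, f

where \alpha is the transition density, C_L the load capacitance, V_{swing} the output voltage swing (equal to V_{dd} for full-swing nodes, recovering the familiar \alpha C_L V_{dd}^2 f), and f the clock frequency.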

In testing, it is desirable to complete the testing of all chips within the minimum possible time. With this objective, the most efficient solution is to conduct tests for all chips simultaneously. However, the power dissipation of a test session should not exceed the maximum power rating of the system under test. So an optimal solution is to be extracted which minimizes the overall test time while satisfying the power constraints. A scheduling algorithm is developed which gives an optimum solution in all possible cases. The algorithm also considers resource conflicts while scheduling tests. In the case of Built-In Self-Test, the power dissipation in the BIST hardware can also be considered while scheduling, which gives a more accurate solution.

 

A Methodology for Exploring the Area-Delay-Power Space for DSP

Mahesh Mehendale, Texas Instruments (India) Ltd.

Mhm@india.ti.com

S. D. Sherlekar, Silicon Automation Systems Ltd.

Sds@sasi.com

 

ABSTRACT

 

This paper presents a methodology for achieving the desired area-delay-power tradeoffs for weighted-sum based DSP kernels, the building blocks of embedded, real-time DSP based SoCs. We propose a framework that encapsulates various algorithmic and architectural transformations and enables systematic exploration of the area-delay-power solution space of DSP algorithms for a given implementation style. The framework is based on a classification of the transformations into seven categories which exploit the unique properties of DSP algorithms and the implementation styles.

Here are the seven categories of transformations:

* Implementing data movement by moving the pointer to the data (a small sketch follows this list)

* Data Flow Graph Restructuring Transformations

* Data Coding

* Transformations that Exploit Redundancy in the Computation

* DFG Transformations based on Mathematical Properties

* Exploiting Relationship between the Real Value Domain and the Binary Domain

* Transformations that Exploit Available Degrees of Freedom
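
As a small illustration of the first category, a FIR delay line can avoid shifting N samples on every new input by advancing a circular-buffer index instead; this is a generic example of ours, not code from the paper:

    class CircularBuffer:
        # Moves a pointer instead of moving data: one index update per new
        # sample, rather than N data moves through a shift buffer.
        def __init__(self, n):
            self.buf = [0.0] * n
            self.head = 0  # index of the newest sample

        def push(self, sample):
            self.head = (self.head - 1) % len(self.buf)
            self.buf[self.head] = sample

        def tap(self, k):
            # k-th most recent sample, i.e. x[n-k] of the delay line
            return self.buf[(self.head + k) % len(self.buf)]

    # Weighted-sum (FIR) kernel built on the buffer
    h = [0.5, 0.3, 0.2]
    d = CircularBuffer(len(h))
    for x in [1.0, 2.0, 3.0]:
        d.push(x)
        y = sum(h[k] * d.tap(k) for k in range(len(h)))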

We discuss the specific transformations for each of these categories in the context of five implementation styles and highlight how the desired area-delay-power tradeoff can be accomplished.

Here are the five implementation styles:

* Programmable processor based implementation

* Implementation using hardware multiplier(s) and adder(s)

* Distributed Arithmetic based Implementation

* Multiplier-less Implementation

* Residue Number System based implementation

 

 

Low Power Micro-architectures for Programmable DSP processors

Amitabh Menon, Texas Instruments India

Amenon@india.ti.com

ABSTRACT

 

As battery operated environments become increasingly programmable, the power dissipation of the embedded computing engines in these devices is a growing concern. A number of such devices employ programmable DSP processors due to the nature of the computation involved in their function. Low power design techniques have been an active area of research for a number of years now, and methods to reduce power have been proposed at all levels, from system to device. Among these, a number of techniques work at the micro-architectural level to reduce the power dissipation and even the energy requirements of typical computational loads. Reduction in energy consumption directly extends battery life and enables the use of smaller and lighter batteries that reduce the overall system cost.

This paper is a survey of the low power design techniques that focus on the micro-architectural level, in the context of DSP processors. The power dissipation in a microprocessor is concentrated in the memory subsystem, the arithmetic processing datapath, the instruction fetch logic, the register file, and the instruction decode logic. Opportunities and techniques for power reduction in each of these units are reviewed, and the tradeoffs involved are discussed. Some of these techniques are essentially software techniques with some hardware support, so one may still consider them to be micro-architectural level optimizations.

Techniques are covered for each of these major subsystems of a DSP.

 

System-Level Considerations in Realizing Low Power DSP Applications

S.D. Sherlekar, Silicon Automation Systems

Sds@sasi.com

ABSTRACT

System-level considerations are as important as --- and sometimes more important than --- circuit-level techniques for power optimisation. This paper begins by presenting various aspects of system-level optimisation techniques, including algorithm selection and applications technology selection. It then looks at various techniques for dynamic power management, especially in the context of the DSP and communication domains. Finally, a glimpse is presented of how asynchronous circuits can provide a lead for the future.

 

Area Efficient Digital Waveform Generators

Rohit Sharma, Texas Instruments, India

Rsharma@ti.com

ABSTRACT

 

Digital waveform generators (DWAGs) are an integral part of modern communication systems, such as TV [Xia], video monitors, mobile networks, and other applications where proper synchronization is necessary between the transmitting and receiving ends. While conventional goals such as minimal area and maximal performance continue to hold, additional constraints such as testability have to be considered. Generally, it is seen that the area constraint also cures the other two problems to a large extent. In this paper we present some design tips resulting in easily testable and area efficient DWAGs, which is very difficult with the present day's limited capabilities of circuit synthesizers. In a case study we show that it is possible to get a 10^3 to 10^5 times improvement [Hasan] in area compared to conventional methods, provided the desired waveforms are repeatable in nature.

PLL synthesized Motherboard Clock Generator: Architecture/Design Issues

Anil K. Gundurao and Kaushal Kumar Jha

{Akg,kkj}@cypress.com

 

ABSTRACT

A general architecture of a motherboard clock generator is presented. Major subblocks like the PLL, dividers, and output paths are described. A spread-spectrum technique to reduce EMI is discussed. A current-steered differential output is presented to show how it reduces jitter.

 

 

 

 

Physical Design Workshop Proceedings

 

 

 

Program Chair:

Bhargab Bhattacharya

Indian Statistical Institute

Bhargab@isical.ac.in

 

 

 

 

 

3rd VLSI Design and Test Workshops,

August 20-21, 1999

Habitat World, New Delhi, India

 

 

 

Gate Resizing to Reduce Power Consumption

Edward Y.C. Cheng and S. Sahni, University of Florida

{yccheng,Sahni}@cise.ufl.edu

 

ABSTRACT

We study the problem of resizing gates so as to reduce overall power consumption while satisfying a circuit’s timing constraints. Polynomial time algorithms for series-parallel and tree circuits are obtained. Gate resizing with multigate modules is shown to be NP-hard.

 

 

Complex Triangle Elimination Problem and its Applications to VLSI

Parthasarathi DasGupta,

MIS Group, Indian Institute of Management, Calcutta, India

Partha@iimcal.ac.in

S. Bhattacharya, D. Pal, S. Pal, and U. Maulik

Computer Science and Technology, Kalyani Engineering College

ABSTRACT

A well known approach to floorplanning is based on rectangular dualization [L90]. This makes use of triangulated adjacency graphs with certain characteristics. It is well known that if the input adjacency graph to the floorplanning phase contains a cycle of length three that is not a face (a complex triangle), transformation of the graph into a rectangular floorplan is impossible. Thus complex triangles (CTs) have to be eliminated before the floorplanning phase. Complex triangle elimination (CTE) in weighted adjacency graphs has been shown to be NP-complete in [SY93], which also proposes a method for the CTE problem for graphs without any nested complex triangle.

In our work, we consider the more general CTE problem for adjacency graphs with multiple levels of nesting of CTs. We prove the problem to be NP-hard by showing polynomial-time reducibility from a well-known NP-complete problem [GJ79]. We use an edge-covering technique for solving the problem.

The input to our problem is an unweighted adjacency graph G with complex triangles, and our goal is to find the minimum number of edges in G whose removal will eliminate all the CTs in G. The input adjacency graph is mapped into a dual graph G', each vertex of which corresponds to a CT; two vertices are joined when the corresponding CTs share a common edge. We next employ a best-first search strategy which attempts to find a minimum set of cliques in G' such that all vertices of G' are covered. At each stage, a lower bound lb on the additional number of edges in G needed for CT elimination is estimated. To compute lb, the graph G is preprocessed by coalescing all CTs within some other CT into a single vertex, so that the reduced graph Gr has no nested CT. At a stage, once a clique in Gr is chosen, its corresponding edge in G is eliminated, and G is modified.
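
To make the edge-covering formulation concrete, here is a toy greedy variant (our illustration only; the paper's method is the best-first clique cover with the lower bound described above):

    from collections import Counter
    from itertools import combinations

    def greedy_cte(edges, complex_triangles):
        # edges: set of frozenset vertex pairs of G;
        # complex_triangles: list of 3-vertex sets (assumed precomputed).
        # Repeatedly remove the edge shared by the most remaining CTs.
        removed = []
        cts = [set(t) for t in complex_triangles]
        while cts:
            score = Counter(frozenset(e) for t in cts
                            for e in combinations(sorted(t), 2)
                            if frozenset(e) in edges)
            best = max(score, key=score.get)
            removed.append(best)
            cts = [t for t in cts if not best <= t]
        return removed

    # Two CTs sharing edge (1,2): removing that one edge eliminates both
    cover = greedy_cte({frozenset(e) for e in [(1, 2), (2, 3), (1, 3),
                                               (1, 4), (2, 4)]},
                       [{1, 2, 3}, {1, 2, 4}])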

The proposed algorithm has been implemented in C/C++ on a Pentium PC and run on some randomly generated triangulated graphs. Our work will have interesting applications in generating triangulated graphs which have rectangular floorplan realizations. An extension of this work would address rectangular graphs with nested complex 4-cycles, in which pseudo-vertices may have to be added to ensure sliceability [YS95].

References

[SY93] Y. Sun and K.-H. Yeap, "Edge covering of complex triangles in rectangular dual floorplanning," Journal of Circuits, Systems and Computers, Nov. 1993.

[GJ79] M. R. Garey and D. S. Johnson, Computers and Intractability, W. H. Freeman and Co., 1979.

[YS95] K.-H. Yeap and M. Sarrafzadeh, "Sliceable floorplanning by graph dualization," SIAM Journal on Discrete Mathematics, May 1995.

[L90] T. Lengauer, Combinatorial Algorithms for Integrated Circuit Layout, John Wiley and Sons, New York, 1990.

Ultra-Fast Placement for FPGAs

Dinesh Bhatia

Design Automation Laboratory

ECECS Department

P.O. Box 210030

University of Cincinnati

Cincinnati, OH 45221--0030, USA

ABSTRACT

Two-dimensional placement is an important problem for FPGAs. Current FPGA capacity allows one million gate equivalent designs. As FPGA capacity grows, new innovative approaches will be required for efficiently mapping circuits to FPGAs. We propose to use the tabu search optimization technique for the physical placement step of the circuit mapping process. Our goal is to reduce the execution time of the placement step while providing high quality placement solutions. In this paper we present a study of tabu search applied to the physical placement problem.

First we describe the tabu search optimization technique. Then we outline the development of a tabu search based technique for minimizing the total wire length and the length of critical path edges for circuits placed on FPGAs. We demonstrate our methodology with several benchmark circuits available from MCNC, UCLA, and other universities. Our tabu search technique has shown dramatic improvement in placement time relative to commercially available CAE tools (20×), and it results in placements of quality similar to that of the commercial tools.
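
The flavor of the approach can be conveyed in a short skeleton. The sketch below is a generic tabu-search placement loop under our own simplifications (wirelength-only cost, pairwise swap moves); it is illustrative, not the authors' implementation:

    import random

    def hpwl(placement, nets):
        # Half-perimeter wirelength; placement maps block -> (x, y)
        total = 0
        for net in nets:
            xs = [placement[b][0] for b in net]
            ys = [placement[b][1] for b in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    def tabu_place(placement, nets, iters=500, tenure=7, cand=20):
        blocks = list(placement)
        best, best_cost = dict(placement), hpwl(placement, nets)
        tabu = {}  # swap pair -> iteration until which it is forbidden
        for it in range(iters):
            # score a random sample of candidate swaps
            scored = []
            for _ in range(cand):
                a, b = random.sample(blocks, 2)
                placement[a], placement[b] = placement[b], placement[a]
                scored.append((hpwl(placement, nets), a, b))
                placement[a], placement[b] = placement[b], placement[a]
            # take the best non-tabu move, even uphill; a tabu move is
            # allowed only if it beats the best seen (aspiration criterion)
            for cost, a, b in sorted(scored, key=lambda s: s[0]):
                key = frozenset((a, b))
                if tabu.get(key, -1) < it or cost < best_cost:
                    placement[a], placement[b] = placement[b], placement[a]
                    tabu[key] = it + tenure
                    if cost < best_cost:
                        best, best_cost = dict(placement), cost
                    break
        return best, best_cost

    # Example: four blocks on a 2x2 grid, one 3-pin net and one 2-pin net
    pl = {'a': (0, 0), 'b': (0, 1), 'c': (1, 0), 'd': (1, 1)}
    result, cost = tabu_place(pl, [('a', 'b', 'c'), ('b', 'd')])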

Dinesh Bhatia

Dinesh Bhatia received a Bachelor's in Electrical Engineering from Regional Engineering College, Surathkal, India in 1985, followed by an MS and a Ph.D. in Computer Science from the University of Texas at Dallas in 1987 and 1990, respectively. His doctoral work was supported by ACM SIGDA scholarships.

He joined the Computer Science and Engineering Department at Southern Methodist University in Dallas in 1990 and moved to the University of Cincinnati in 1991. Currently he is an Associate Professor in the Department of Electrical and Computer Engineering and Computer Science at the University of Cincinnati. He also directs the Design Automation Laboratory, and his research interests include all aspects of architecture and CAD for field programmable gate arrays, reconfigurable and adaptive computing, physical design automation of VLSI systems, applied graph theory, and algorithms. He has authored over fifty papers and has been invited to present tutorials at various conferences. He also served as a guest editor for a special issue on field programmable gate arrays (FPGAs) for the VLSI Design journal. He is an associate editor of IEEE Transactions on Computers. His research is actively supported by the Defense Advanced Research Projects Agency (DARPA), the United States Air Force, and various industries. He is a member of IEEE, ACM, and Eta Kappa Nu.

Applications of Graph Theory to VLSI Design Problems

Rubin Parekhji, Texas Instruments (India) Ltd.

Parekhji@india.ti.com

ABSTRACT

 

One of the challenges in VLSI design comes from the numerous optimisation problems that have to be solved during the design cycle. As circuits become increasingly complex, the complexity of these problems grows exponentially. Design automation tools help to automate and simplify this process at various stages of the design cycle, from specification to layout, by implementing algorithms based on graph formulations. However, most of these problems involve a search over an exponential solution space, and hence cannot be solved in polynomial time. As a result, the tools are heuristic in nature and offer, at best, near-optimal solutions.

This tutorial discusses the applications of graph theory to VLSI design, and the design automation tools used therein. Interesting properties of graphs, and standard formulations based on them are explained. Solution techniques using standard algorithms, heuristic approaches, and cost function minimisation are discussed. Through a set of representative examples, it is shown how graph theory provides an elegant means of addressing various combinatorial optimisation problems in simulation, synthesis, timing analysis, test, and physical design. As illustrations, it is shown how diverse problems can be similarly modelled, optimal solutions obtained, feasibility checks performed, and the entire solution space effectively traversed, using algorithms based on the structure and properties of regular graphs and sub-graphs.

No prerequisites are assumed. The emphasis in the tutorial is on problem solving using graph theory rather than specialised graph based algorithms.


 

A Physical Layout Efficiency Checker with an Emphasis on Die Area Reduction

Ganesh Kamath and Preetham Kumar, Texas Instruments India

{kamath,preetham}@india.ti.com

ABSTRACT

 

In the semiconductor industry it is important to win a customer by providing a product with the lowest cost and the best performance. The lower the die size, the lower the cost of the chip. Hence die size reduction is one of the major contributors to higher net revenue per wafer. To achieve die size reduction without sacrificing performance, there is a need for an efficient layout. An auto layout tool generates a module layout using library cells; the area efficiency it reports depends solely on the library cell area provided. If the library cell is not drawn with high efficiency, the report may not be genuine. Hence there is a need for gauging the layout efficiency of any given layout with higher accuracy. For an analog layout, there has to be a tradeoff between the performance of the chip and the die size. For a digital layout, as the logic content is relatively high, the available area has to be utilized to a very high degree.

The PHYSICAL LAYOUT EFFICIENCY CHECKER methodology reports an overall efficiency for the drawn layout. Since power routing is generally planned on the topmost metal layer, the methodology can also report the efficiency of the power distribution. More importantly, a feedback mechanism is provided for underutilized areas, considering design and process constraints. The methodology thus provides a genuine means to report the layout efficiency and reduce the die area, thereby improving net revenue per wafer.

 

 

Decomposition of Finite State Machines for Area, Delay Minimization

Rupesh S. Shelar, Silicon Automation Systems

Rupesh@sasi.com

Madhav Desai and H. Narayanan, IIT Bombay

{madhav,hn}@ee.iitb.ernet.in

ABSTRACT

 

In this paper, we examine the state assignment problem as that of embedding a given finite state machine in the Kronecker product of two smaller machines. To construct the smaller machines, the states of the original machine are partitioned in two different ways such that the resulting partitions have their ‘meet’ as the singleton partition. We propose a cost function for this ‘decomposition’ which reflects the cost of realizing the machine through the Kronecker product. The cost function is justified theoretically, using a model of multi-level implementation, as well as empirically, on a particular benchmark. We describe a simple algorithm to minimize this cost function. We obtain empirical results by running the algorithm on a set of 16 MCNC benchmarks and using one-hot encoding for the implementation of the decomposed machines. For multi-level implementation, the results show an average reduction of 8.52% in area and 81.87% in delay when compared with implementations obtained using JEDI, and an average reduction of 4.40% in area and 104.96% in delay when compared with implementations obtained using NOVA. This scheme has the potential to serve as an alternative to conventional state assignment tools, since we observe that it works well for larger finite state machines. It also appears promising for reducing the power consumed using gated clocks, since the partitions can be chosen such that the corresponding machines have the maximum number of self-edges.
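The construction can be restated compactly as follows; this is our hedged summary in notation, not a formula reproduced from the paper.

    % pi_1 and pi_2 are partitions of the state set S whose 'meet' is the
    % singleton partition, so the pair of blocks containing s identifies s:
    \[
      \pi_1 \wedge \pi_2 \;=\; \{\,\{s\} : s \in S\,\}
      \quad\Longrightarrow\quad
      s \;\mapsto\; \bigl(B_{\pi_1}(s),\, B_{\pi_2}(s)\bigr),
    \]
    % and the machine M embeds in the Kronecker product M_1 \otimes M_2 of the
    % two component machines, whose state counts satisfy |S_1|\cdot|S_2| \ge |S|.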

 

Estimating the Deadline Miss Probability in Real Time Embedded Systems

Shampa Chakraverty, Netaji Subhash Institute of Technology

C.P. Ravikumar, Indian Institute of Technology

{sc_12,cpravikumar}@hotmail.com

ABSTRACT

In this paper, we address the problem of assessing the probability that a real-time system based on a heterogeneous multiprocessor bus-based architecture will be able to meet all the deadlines posed by the system specification. We assume that the system is described using a task precedence graph. The execution time of a task is modeled as a random variable with a beta distribution. The designer of a real-time system will have to consider many resource allocation alternatives before freezing the final system architecture. The method described in this paper allows the designer to evaluate these alternatives. When combined with an objective function that also reflects the cost of the system architecture, our estimation procedure can be a powerful tool for the synthesis of cost-optimal real-time systems.
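To make the quantity concrete, here is a minimal Monte Carlo sketch of the deadline miss probability being estimated; the task graph, beta parameters, and deadline are hypothetical, contention for processors and the bus is ignored for simplicity, and the paper's own estimation procedure is not reproduced.

    # Monte Carlo estimate of the deadline miss probability of a task
    # precedence graph whose task execution times are beta-distributed.
    # Tasks are listed in topological order; a miss occurs when the
    # completion time of the final task exceeds the deadline.
    import random

    # Hypothetical DAG: task -> (predecessors, (alpha, beta, min_t, max_t))
    TASKS = {
        "t1": ([],           (2.0, 5.0, 1.0, 4.0)),
        "t2": (["t1"],       (3.0, 3.0, 2.0, 6.0)),
        "t3": (["t1"],       (2.0, 2.0, 1.0, 5.0)),
        "t4": (["t2", "t3"], (4.0, 2.0, 1.0, 3.0)),
    }
    DEADLINE = 12.0

    def one_run():
        finish = {}
        for task, (preds, (a, b, lo, hi)) in TASKS.items():
            exec_time = lo + (hi - lo) * random.betavariate(a, b)
            start = max((finish[p] for p in preds), default=0.0)
            finish[task] = start + exec_time
        return finish["t4"]

    N = 100_000
    misses = sum(one_run() > DEADLINE for _ in range(N))
    print("estimated miss probability: %.4f" % (misses / N))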

 

An Approach towards Hierarchical Detection of ESD Errors in a Physical Design Database

Ananth Somayaji and Sabyasachi Nag, Texas Instruments India

gs_ananth@ti.com, sabya@india.ti.com

ABSTRACT

ElectroStatic Discharge (ESD) is the transient discharge of static charge from one body to another. It occurs when two bodies, at least one of them charged, come into contact. In the semiconductor industry, ESD is a serious reliability issue because of the high voltages and current densities that appear during an ESD strike; these cause irreparable damage to ICs, such as gate oxide breakdown and contact spiking. ICs become increasingly susceptible to ESD as their dimensions shrink and their complexity grows. Hence it is absolutely essential to ensure that an IC is safe from ESD throughout its lifetime. Various protection strategies, both on-chip and off-chip, have been proposed over time. This paper focuses on on-chip strategies and proposes a more effective way of verifying them. The on-chip ESD protection strategy is simple: clamp the voltage below the gate breakdown voltage, and provide a low-resistance path during an ESD strike so that the ESD current does not affect the output buffers. As ICs become more complex, merely having a protection device at every pin is not enough, because of sneak paths present in the core of the chip. To take care of these issues, a set of ESD design and layout guidelines exists and must be followed strictly to ensure ESD robustness. Hence it becomes extremely critical to verify that the layout adheres to these ESD guidelines before the layout is sent for PG.
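As a toy illustration of the kind of check the paper automates, the sketch below verifies that every pad reaches a clamp device; the netlist representation and names are hypothetical, and a real hierarchical checker operates on the physical design database itself.

    # Toy ESD rule check: every pad must reach a clamp device through the
    # connectivity graph, so that a strike has a low-resistance path.
    # The netlist is a hypothetical adjacency list of connected nodes.

    NETLIST = {
        "pad_A": ["n1"], "n1": ["clamp_1", "core_in"],
        "pad_B": ["n2"], "n2": ["core_in2"],            # no clamp: violation
        "clamp_1": [], "core_in": [], "core_in2": [],
    }

    def reaches_clamp(node, seen=None):
        seen = seen if seen is not None else set()
        if node.startswith("clamp"):
            return True
        seen.add(node)
        return any(reaches_clamp(n, seen)
                   for n in NETLIST.get(node, []) if n not in seen)

    for pad in (n for n in NETLIST if n.startswith("pad")):
        if not reaches_clamp(pad):
            print("ESD violation: %s has no clamp path" % pad)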

 

An Effective Methodology to Extract Parasitics from the Layout

Rajkumar and Sanjay Kulkarni, Texas Instruments (India) Ltd.

{craj,sanju}@india.ti.com

ABSTRACT

Layout is the physical representation of an integrated circuit. Parasitics are unintentional devices present in the layout, formed by the interconnections between circuit components; they alter circuit performance by adding unintended resistance and capacitance. For critical analog designs, it therefore becomes essential to extract these parasitic devices from the layout and analyze the actual circuit performance. The extracted devices can be modeled as lumped resistances and capacitances and added to the schematic for simulation. This flow is called back-annotation.

Many EDA tools are available today to extract parasitics from a layout, each using a different extraction algorithm; this paper does not address the extraction algorithms themselves. Instead, it concentrates on integrating all the different steps involved in parasitic extraction into a single flow, and on an intermediate step that presents the parasitic information in the layout. The intent is to help layout designers produce layouts with fewer parasitics, and to reduce cycle time by providing a completely automated flow for back-annotation.

The methodology proposed here is an effective way of executing the back-annotation flow. It integrates all the steps involved in back-annotation, which helps reduce cycle time, and it gives a visual representation of the parasitic devices in the layout, helping the layout designer see how to reduce the parasitic elements in the layout.
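For illustration, a minimal sketch of the lumped-RC step that such a flow automates is shown below; the sheet resistance and capacitance coefficients are hypothetical process numbers, not values from the paper.

    # Lumped RC estimate for a rectangular interconnect segment:
    # R = Rsheet * (length / width)
    # C = Carea * (length * width) + Cfringe * perimeter
    RSHEET   = 0.08      # ohm/square (hypothetical metal sheet resistance)
    C_AREA   = 0.03e-15  # F/um^2     (hypothetical area capacitance)
    C_FRINGE = 0.05e-15  # F/um       (hypothetical fringe capacitance)

    def lumped_rc(length_um, width_um):
        r = RSHEET * length_um / width_um
        c = C_AREA * length_um * width_um + C_FRINGE * 2 * (length_um + width_um)
        return r, c

    r, c = lumped_rc(1000.0, 0.5)   # a 1 mm wire, 0.5 um wide
    print("R = %.1f ohm, C = %.3f pF" % (r, c * 1e12))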

Operation scheduling in a Reconfigurable Computing Environment

Puneet Gupta, Nitij Mangal

Electrical Engineering, IIT, Delhi.

{puneet_iitd, nitij_mangal}@hotmail.com

ABSTRACT

We consider the problem of scheduling the operations of a data flow graph in a reconfigurable computing environment. In recent years, FPGAs have become highly popular as a medium for rapidly prototyping complex systems. As FPGA technology improves in terms of speed and the number of gates per chip, FPGAs are being used in system construction and not just for prototyping. Innovations such as dynamic and partial reconfiguration of FPGAs have opened new doors to high-performance computing using dynamically reconfigurable FPGAs, which use a limited number of FPGA chips to run a large computation within a time limit. In this paper, our intention is to describe the problem of operation scheduling in such a dynamically reconfigurable environment. We first classify the various reconfigurable computing environments and develop an optimization criterion for the problem of operation scheduling. A heuristic algorithm for operation scheduling is then described, and its performance is studied on several examples.
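As a rough illustration of this class of heuristic, the sketch below packs ready operations into fixed-capacity configurations and charges a reconfiguration penalty between them; the graph, areas, and costs are hypothetical, and this is not the authors' algorithm.

    # Greedy list scheduling of a dataflow graph onto a reconfigurable
    # device with limited capacity: operations whose inputs are ready are
    # packed into the current configuration until capacity runs out, then
    # a reconfiguration penalty is paid and a new configuration starts.

    # Hypothetical DFG: op -> (predecessors, area units); topological order.
    DFG = {"a": ([], 2), "b": ([], 3), "c": (["a"], 2),
           "d": (["a", "b"], 3), "e": (["c", "d"], 2)}
    CAPACITY = 5          # area units per configuration
    RECONF_COST = 10      # time units to reconfigure
    STEP = 1              # time units to execute one configuration

    def schedule(dfg):
        done, time, configs = set(), 0, []
        while len(done) < len(dfg):
            ready = [op for op, (preds, _) in dfg.items()
                     if op not in done and all(p in done for p in preds)]
            config, used = [], 0
            for op in ready:                  # pack ready ops greedily
                area = dfg[op][1]
                if used + area <= CAPACITY:
                    config.append(op)
                    used += area
            assert config, "an operation exceeds the device capacity"
            time += RECONF_COST + STEP
            done.update(config)
            configs.append(config)
        return configs, time

    configs, total = schedule(DFG)
    print("configurations:", configs, "total time:", total)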

IEEE Computer Society

TTTC: Test Technology Technical Council

PURPOSE: The Test Technology Technical Council (TTTC) is a volunteer professional organisation sponsored by the IEEE Computer Society. The goals of TTTC are to contribute to members' professional development and advancement, to help them solve engineering problems in electronic test, and to help advance the state of the art. All activities are led by volunteer members.

MEMBERSHIP: TTTC membership is open to all individuals interested in test engineering at a professional level. In addition to the benefits of personal association with other test professionals and the opportunity to serve on a wide range of committees, members receive TTTC Newsletters and announcements.

DUES: There are NO dues for TTTC membership and no parent-organisation membership requirements; however, there are substantial reductions in fees for TTTC-sponsored meetings and tutorials for members of the IEEE and/or the IEEE Computer Society (IEEE and IEEE/CS do have member fees).

NEWSLETTER: Every year TTTC publishes four issues of its newsletter embedded in the magazine IEEE Design & Test of Computers. In addition TTTC publishes several issues of a more comprehensive newsletter that is mailed to all members. The newsletter covers current issues in test, TTTC technical activities, standards, technical meetings, etc.

STANDARDS: TTTC initiates, nurtures, and encourages new test standards. TTTC-initiated Working Groups have produced numerous IEEE standards, including the 1149 series used throughout the industry.

TECHNICAL ACTIVITIES: TTTC sponsors a number of Technical Activity Committees (TACs) that address emerging test technology topics. TTTC TACs guide a wide range of activities in these topic areas.

TECHNICAL MEETINGS: TTTC sponsors several well-known conferences and symposia and holds numerous regional and topical workshops to spread technical knowledge and advance the state of the art.

TUTORIALS and EDUCATION: TTTC sponsors a comprehensive Test Technology Educational Program (TTEP). The program gives design and test professionals opportunities to update and expand their knowledge of test technology, and to earn official accreditation from IEEE TTTC upon completing four full-day tutorials offered by TTEP. TTEP tutorials are held in conjunction with ITC, VTS, ATS, ETW, and DFTS.

TTTC On-Line: The TTTC Web site at http://computer.org/tttc offers samples of the TTTC Newsletter, information about technical activities, conferences, workshops, and standards, and links to the Web pages of a number of TTTC-sponsored technical meetings.

TTTC Officers for 1999

 

TTTC Chair Yervant ZORIAN LogicVision, Inc zorian@logicvision.com

Past Chair Fred LIGUORI ATE Consulting Services ffliguori@aol.com

Senior Past Chair Ned KORNFIELD Widener University ned613@aol.com

TTTC Vice Chair Michael NICOLAIDIS TIMA michael.nicolaidis@imag.fr

TTTC Vice Chair Paolo PRINETTO Politecnico di Torino Paolo.Prinetto@polito.it

ITC General Chair Mike TOPSAKAL topsakal@jps.net

IEEE D&T Editor-in-Chief Yervant ZORIAN LogicVision, Inc. zorian@logicvision.com

Secretary Mouli CHANDRAMOULI Synopsys, Inc. mouli@synopsys.com

Finance Fred LIGUORI ATE Consulting Services ffliguori@aol.com

 

Group Chairs

 

Technical Meetings Dimitris GIZOPOULOS 4Plus Technologies dgizop@4plus.com

Standards Patrick McHUGH Lockheed Martin P.McHugh@ieee.org

Tutorials & Education Michael NICOLAIDIS TIMA michael.nicolaidis@imag.fr

Technical Activities Anthony P. AMBLER University of Texas at Austin ambler@mail.utexas.edu

Asia & Pacific Kozo KINOSHITA Osaka University kozo@ap.eng.osaka-u.ac.jp

Europe Christian LANDRAULT LIRMM landrault@lirmm.fr

Latin America Fabian VARGAS Catholic University - PUCRS vargas@ee.pucrs.br

North America André IVANOV University of British Columbia ivanov@ee.ubc.ca

 

Technical Activity Committees

 

Bare Substrate/Board Christophe VAUCHER T4 Vaucher@bare-board-test.com

Defect Tolerance Claude THIBEAULT Ecole de Technologie Super. thibeault@ele.etsmtl.ca

Vincenzo PIURI Politecnico di Milano piuri@elet.polimi.it

Economics of Test Anthony P. AMBLER University of Texas at Austin ambler@mail.utexas.edu

Magdy S. ABADIR Motorola, Inc. abadir@ibmoto.com

Embedded Core Test Yervant ZORIAN LogicVision, Inc. zorian@logicvision.com

High Level Design & Test Prab VARMA Veritable prab@veritable.com

Iddq Testing Keith BAKER Philips ED&T Baker@natlab.research.philips.com

Manufacturing Test David LEPEJIAN Heuristic Physics Lab. dyl@hpl.com

MCM Testing Yervant ZORIAN LogicVision, Inc. zorian@logicvision.com

Memory Test Rochit RAJSUMAN Advantest America R&D Center r.rajsuman@advantest.com

MEMS Testing Bernard COURTOIS TIMA Bernard.Courtois@imag.fr

Mixed-Signal Testing Bozena KAMINSKA OPMAXX Inc. bozena@opmaxx.com

On-Line Testing Michael NICOLAIDIS TIMA michael.nicolaidis@imag.fr

Software Testing Yashwant k. MALAIYA Colorado State University malaiya@cs.colostate.edu

System Test John W. SHEPPARD ARINC Incorporated Jsheppar@arinc.com

Randy William SIMPSON IDA rsimpson@ida.org

Test Education Mani SOMA University of Washington soma@ee.washington.edu

Test Synthesis Kwang-Ting CHENG Univ. of California Santa Barbara timcheng@ece.ucsb.edu

Thermal Test Bernard COURTOIS TIMA Bernard.Courtois@imag.fr

Verification & Test Jacob A. ABRAHAM University of Texas at Austin jaa@cerc.utexas.edu

 

Standards Working Groups

 

IEEE 1149.1 Christopher J. CLARK Intellitech Corporation cjclark@intellitech.com

IEEE 1149.5 Harry HULVERSHORN LogicVision, Inc. harryh@logicvision.com

IEEE P1149.4 Adam CRON Synopsys, Inc. acron@synopsys.com

IEEE 1450 - (STIL) Gregory MASTON Fluence gregm@fluence.com

Tony TAYLOR Fluence tonyt@fluence.com

IEEE P1500 Yervant ZORIAN LogicVision, Inc. zorian@logicvision.com

IEEE P1532 Neil JACOBSON Xilinx Corp. neil.jacobson@xilinx.com

 

TTTC sponsored Technical Meetings in 1999

 

Feb 23-26 Pacific Northwest Test Workshop Bodega Bay, CA - USA E. J. McCLUSKEY

Feb 28 - Mar 02 TWS'99 German Workshop Potsdam - Germany H. T. VIERHAUS

Mar 9-12 DATE'99 Munich - Germany R. ERNST

Mar 22-24 Test Synthesis Workshop Santa Barbara, CA - USA R. C. AITKEN

Mar 30 - Apr 01 Design, Test of MEMS/MOEMS Paris - France B. COURTOIS

Apr 25 IDDQ Testing Mini-Workshop Dana Point, CA - USA Y. K. MALAIYA

Apr 25-29 VLSI Test Symposium Dana Point, CA - USA M. NICOLAIDIS

Apr 28-29 Testing Embedded Cores Workshop Dana Point, CA - USA Y. ZORIAN

May 19-21 Signal Propagation Workshop Titisee-Neustadt - Germany J. P. MUCHA

May 25-28 European Test Workshop Constance - Germany H.-J. WUNDERLICH

May 27-28 North Atlantic Test Workshop West Greenwich, RI - USA J.-C. LO

Jun 06-09 Southwest Test Workshop San Diego, CA - USA W. R. MANN

Jun 15-18 Mixed Signal Testing Workshop British Columbia - Canada A. IVANOV

Jun 16-18 Rapid System Prototyping Clearwater, FL - USA R. LAUWEREINS

Jul 05-07 On-Line Testing Workshop Rhodes - Greece M. NICOLAIDIS

Aug 20-21 VLSI Design & Test Workshops New Delhi - India C. P. RAVIKUMAR

Aug 09-10 Memory Test Workshop San Jose, CA - USA R. RAJSUMAN

Aug 17-22 Computer Science Conference Yerevan - Armenia Y. SHOUKOURIAN

Sep 06-08 Electronic Systems Conference Bratislava - Slovakia D. DONOVAL

Sep 12-15 High Density Module Test VI Napa, CA - USA R. J. WAGNER

Sep 15-17 Known Good Die Industry Workshop Napa, CA - USA L. GILG

Sep 28-30 International Test Conference Atlantic City, NJ - USA M. TOPSAKAL

Sep 30 - Oct 01 Production Test Automation Workshop Atlantic City, NJ - USA A. P. AMBLER

Sep 30 - Oct 01 Microprocessor Test Workshop Atlantic City, NJ - USA M. S. ABADIR

Sep 30 - Oct 01 System Test and Diagnosis Workshop Atlantic City, NJ - USA R. W. SIMPSON

Oct 04-06 Thermal Investigations of ICs Workshop Rome - Italy B. COURTOIS

Oct 25-27 1999 DAK Forum Trondheim - Norway E. J. AAS

Nov 01-03 Defect & Fault Tolerance Symposium Albuquerque, NM - USA C. METRA

Nov 04-06 High-Level Design Validation Test W. San Diego, CA - USA A. ORAILOGLU

Nov 16-18 Asian Test Symposium Shanghai - China B. M. Y. HSIAO

 

 

 

TTTC Office

1474 Freeman Drive

Amissville, VA 20106

USA

Phone: +1-540-937-8280

Fax: +1-540-937-7848

E-mail: tttc@computer.org

URL: http://computer.org/tttc