Statutory reporting refers to the financial reporting that helps regulate public companies listed on the world’s stock exchanges and the accompanying requirements detailed by governmental bodies such as the U.S. Securities and Exchange Commission. This is typically a quarterly reporting requirement.
Statutory requirements come from many sources:
- The Financial Accounting Standards Board (FASB). The FASB publishes U.S. financial accounting and reporting requirements.
The service strategy of any service provider must be grounded on a fundamental acknowledgment that its customers do not buy products; they buy the satisfaction of particular needs.
The goal of service management is to transform existing assets, capabilities and resources into value for its customers.
As the theory goes, a project manager is selected for a job and then allowed to assemble a team of his or her own choosing. But in practice, teams are often chosen without consulting the manager: you may find out you already have a team when you receive your assignment. Whatever the reason this happened, it usually means trouble.
To solve the problems of having a team imposed on you, consider the following ideas:
- Suggest a different approach. Simply complaining about the way project teams are put together in your organization may not lead to a better idea. It’s much more effective to offer a solution that makes sense to top management. If they recognize the value of allowing project managers to choose their own teams, they will be more likely to allow you to take part in team selection.
Let’s start by identifying these people whom we want to satisfy. I’ll use the broad term stakeholders for these people. Everyone with an interest in the testing we do and in the quality of the final deliverable is ultimately a test stakeholder.
We can divide the list of test stakeholders into external and internal stakeholders. We can choose any number of boundaries between internal and external, but let’s take an obvious one: the internal stakeholders are those doing, leading, or managing the test work, and the external stakeholders are all other applicable stakeholders.
So, who are these stakeholders? The answer varies from one project, product, and organization to the next. However, here are some typical answers, starting from the most immediately obvious stakeholders (the ones we work with daily) to the ones perhaps less obvious but no less important (the ones who ultimately are satisfied that testing accomplished what it must):
- Fellow testers: The people doing the testing work.
Why do we need a risk-based testing approach in our projects?
- Mission-critical information systems are a key part of all large organizations
- The definition of mission-critical and the scope of information systems has changed dramatically
- Due to this increased importance of information systems and issues such as the year 2000 problem, software quality is more critical than ever
- Research indicates that most organizations are only comfortable testing or maintaining 5% of their application code on an annual basis
- Information systems are increasingly interconnected
- Information systems can be accessed directly by users
- System downtime or defects result in an immediate loss of service to the user, and/or business
Complaint: Reliability: Services are down frequently (e.g. email, hosting, application development, network)
- How to mitigate: Specify and enforce service level agreements (SLAs) that define reliability/system uptime. Manage demand to minimize the load placed on systems. Create reliability performance measures for an IT scorecard and use them to manage.
Complaint: Service level: The performance measures defined in our SLAs are not met.
- How to mitigate: Define service levels and expectations through SLAs. Measure service levels through performance management.
Complaint: Cost overruns: Development of new applications always runs over budget.
- How to mitigate: Put a cost infrastructure in place to capture costs and avoid overruns. Use a business case that defines planned costs and compares them to actual results. Define performance measures for new application development.
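As a small sketch of the kind of reliability performance measure an SLA might enforce, the snippet below computes measured uptime over a reporting period and checks it against a contractual target. The 99.9% target and the outage figure are hypothetical examples, not values from the text.

```python
# Sketch of an SLA uptime check. The 99.9% target and the outage
# minutes used below are hypothetical illustrations.

def uptime_percent(total_minutes: int, outage_minutes: int) -> float:
    """Return the percentage of the period the service was available."""
    return 100.0 * (total_minutes - outage_minutes) / total_minutes

SLA_TARGET = 99.9                 # hypothetical contractual uptime target (%)
MINUTES_IN_30_DAYS = 30 * 24 * 60

measured = uptime_percent(MINUTES_IN_30_DAYS, outage_minutes=50)
verdict = "meets" if measured >= SLA_TARGET else "misses"
print(f"Measured uptime: {measured:.3f}% ({verdict} the {SLA_TARGET}% SLA)")
```

A measure like this, tracked period over period on an IT scorecard, is what turns the SLA from a document into something that can actually be managed.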
There are many key principles that can help a project manager succeed in a software project. Here are five of them:
- Project managers must focus on three dimensions of project success. Simply put, project success means completing all project deliverables on time, within budget, and to a level of quality that is acceptable to sponsors and stakeholders. The project manager must keep the team’s attention focused on achieving these broad goals.
- Planning is everything — and ongoing. On one thing all PM texts and authorities agree: The single most important activity that project managers engage in is planning — detailed, systematic, team-involved plans are the only foundation for project success. And when real-world events conspire to change the plan, project managers must make a new one to reflect the changes. So planning and replanning must be a way of life for project managers.
Five basic tools underlie the approach to test management:
- A thorough test plan. A detailed test plan is a crystal ball, allowing you to foresee and prevent potential crises. Such a plan addresses the issues of scope, quality risk management, test strategy, staffing, resources, hardware logistics, configuration management, scheduling, phases, major milestones and phase transitions, and budgeting.
- A well-engineered test system. Good test systems ferret out, with wicked effectiveness, the bugs that can hurt the product in the market or reduce its acceptance by in-house users. A good test system also possesses internal and external consistency, is easy to learn and use, and builds on a set of well-behaved and compatible tools. I use the phrase “good test system architecture” to characterize such a system. The word architecture fosters a global, structured outlook on test development within the test team. It also conveys to management that creating a good test system involves developing an artifact of elegant construction, with a certain degree of permanence.
There are many different ways in which Software Metrics can be used, some of which are almost specialties in their own right. There are also many ways in which the domain of Software Metrics can be divided. The approach I prefer is to consider specific areas of application of Software Metrics.
The most established area of Software Metrics has to be cost and size estimation techniques. There are many proprietary packages on the market that will provide estimates of software system size, cost to develop a system and the duration of a development or enhancement project. These packages are based on estimation models, the best known of these being the Constructive Cost Model (COCOMO), developed by Barry Boehm and subsequently updated based on the experiences of many companies and individuals. Various techniques that do not require the use of tools are also available.
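To make the model concrete, here is a minimal sketch of the Basic COCOMO equations for an organic-mode (small, in-house) project. The coefficients are the published Basic COCOMO organic-mode values; the 32-KLOC project size is a made-up example, and real estimates would use the later, calibrated versions of the model the text alludes to.

```python
# Basic COCOMO, organic mode. Coefficients (2.4, 1.05, 2.5, 0.38) are
# the published Basic COCOMO organic-mode values; the 32-KLOC size
# below is an illustrative example only.

def cocomo_organic(kloc: float) -> tuple[float, float]:
    """Return (effort in person-months, duration in calendar months)."""
    effort = 2.4 * kloc ** 1.05        # person-months
    duration = 2.5 * effort ** 0.38    # calendar months
    return effort, duration

effort, duration = cocomo_organic(32.0)
print(f"Estimated effort: {effort:.1f} person-months over {duration:.1f} months")
```

Note that effort grows slightly faster than linearly with size (the 1.05 exponent), which is one of the model's central claims about software projects.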
Different types of software require different levels of testing rigor. All code worth developing warrants at least basic functional and structural testing, for example exercising all of the major requirements and all of the code. In general, most commercial and government software should be tested more stringently. Each requirement in a detailed functional specification should be tested, the software should be tested in a simulated production environment (and is typically beta tested in a real one), at least all statements and branches should be tested for each module, and structured testing should be used for key or high-risk modules. Where quality is a major objective, structured testing should be applied to all modules. These testing methods form a continuum of functional and structural coverage, from the basic to the intensive. For truly critical software, however, a qualitatively different approach is needed.
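Structured testing (McCabe's technique) keys the number of required basis-path tests for a module to its cyclomatic complexity, v(G) = E − N + 2 for a single-entry, single-exit control-flow graph with E edges and N nodes. The sketch below computes v(G) for a tiny, made-up control-flow graph; the node names are illustrative only.

```python
# Cyclomatic complexity of a control-flow graph: v(G) = E - N + 2P,
# with P = 1 connected component. v(G) gives the minimum number of
# basis-path test cases structured testing requires for the module.

def cyclomatic_complexity(edges: list[tuple[str, str]]) -> int:
    """Return v(G) = E - N + 2 for a single connected control-flow graph."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical control-flow graph: one if/else followed by one loop.
edges = [
    ("entry", "if"), ("if", "then"), ("if", "else"),
    ("then", "loop"), ("else", "loop"),
    ("loop", "loop_body"), ("loop_body", "loop"),
    ("loop", "exit"),
]
print("v(G) =", cyclomatic_complexity(edges))  # one decision + one loop + 1
```

Each decision point (the if and the loop condition) adds one to v(G), so this graph needs three basis paths, which is what makes the technique a natural escalation beyond plain statement and branch coverage for key or high-risk modules.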
Critical software, in which (for example) failure can result in the loss of human life, requires a unique approach to both development and testing. For this kind of software, typically found in medical and military applications, the consequences of failure are appalling. Even in the telecommunications industry, where failure often means significant economic loss, cost-benefit analysis tends to limit testing to the most rigorous of the functional and structural approaches described above.