Values Driven Development (VsDD) Part 2


In my previous post I described how we should create a list of values to drive our development.

In this post, I’ve created a fictional picture of how values might fit into the pipeline of work.

You could argue that values should come first, and I would agree, but in practice they are often secondary.

You may also notice the controversial inclusion of religious beliefs in the value chain. Whilst the wise know that a professional work environment works best when religion (and politics) are left at home, we can't ignore that an individual's belief system will, and does, affect the choices they make.

[Figure: Values Driven Development pipeline diagram]


I hereby coin the phrase 'Values Driven Development' (Values, with an 's').

Please do not think this is a Software Methodology like Test Driven Development, Behaviour Driven Development, or even Stackoverflow Driven Development, which is similar to asking your cuddly toy on your desk what the problem is.

Values Driven Development is the philosophy and methodology of using ethical, moral, professional and aspirational values to drive a software development company, versus the current perception of value (aka money). Think of it as akin in style to the Agile Manifesto's values, but prevalent at all levels.

Although there are overlaps with Tom Gilb’s Value Driven Development, my proposition is inherently different.

I will describe how a typical perception of Value could restrict corporate growth and employee satisfaction when coupled with Agile methodologies. I’ll explore some example values and how they could influence the decision making process. There’s nothing scientific, no hard data, just opinions.

None of the following statements are, in any way, to be associated with the internal workings of MooD International.

The risks of agile pipelines

It may be no surprise that software businesses want to maximize their Return On Investment, driven by C-Level 'Outcomes'. This typically trickles down the employee stack into a value driven pipeline (value, without an 's'). User Stories have business value – or they don't get done. The backlog is filled with a large stream of valuable work that is aligned to some business outcome. The teams will be busy being successful. Sounds great, right? Let's high five our Agile process!

Targets are targets: you're likely to get what you asked for, quite possibly less, and very rarely more. If you think "We value exceeding targets over meeting them" is the answer, then you are a sheep. There is a huge risk that the outcomes themselves have limited the potential for growth.

Agile/Scrum can make matters worse, as teams can be led into a false sense of security, thinking that the best work they could be doing is in this 'value pipeline'. Depending on how good your PO is, or how confident your team is at working around the PO, it's hard for the team to challenge that. The culture can quickly become purely ROI/value driven, which sounds great to some, but you only get what you ask for.

Usually the best ideas occur when people stop and think, or just try something else out. But when there are targets and a stack of ROI-prioritized stories, failure isn't an option, and there's little capacity for free thought, innovation or taking risks. Scrum can stimulate a highly intense and stressful work pattern; it's not called a 'walk', it's called a 'sprint'. For all the amazing speed agile provides, it can give the impression that value is being delivered whilst driving talented, creative individuals into submission. Today's developer has to tolerate a vast amount of complexity in their work. Such complexity requires an amazing amount of brainpower, willpower and focus. Are we getting the most value out of them? That is the big question.

Is Agile broken?

No, of course not. The Product Owner should be responsible for making sure that they are fostering an awesome team environment, but sometimes that can be lost under a sustained barrage of 'must do' work. You could argue that one possibility is to spark up an R&D agile team to allow creative software types to design and prototype new cool toys in their own playpen – enter the land of team-envy and reduced morale for all the other teams! Besides, that may use only a fraction of your team's potential. Imagine there's a dev called Jo. Jo had all the bright ideas last year but is critical to Project X, so he's not allowed in the R&D team. He might actually make you a few million with his next crazy idea, but I still don't believe the answer is to move him. That's just the talent you see, rather than the talent yet to be discovered.

What to do?

Change the culture. Put values first and foremost, from the top down. Make the values public, caring, meaningful. Foster the independent growth potential of all employees, not just the rockstars you know about. Let those values help guide ROI and value judgements, and where necessary produce examples of how those values could manifest and change decision making.

Example Values

We value supporting employees' creativity and innovation over continual backlog work.

e.g. Rotate employees out of a sprint to do what they want, and show the team afterwards.

We value continuous professional development over ‘resting on your laurels’.

e.g. That conference they've been going on about? Send them to it. Insist employees keep learning, and provide time for it.

We value telling the customer the project will be late over making teams work unreasonable hours.

e.g. Recruitment, burnout and lost morale are not free. Quality drops in tired employees. Demonstrating integrity, honesty and care goes a long way.

We value repeated, sustained business with customers over impulsive or risky delivery schedules.

We value paying employees their fair market rate over just what we can get away with.

We value quality over quantity.

We value the work you've not done over blindly following a request.

We value face to face communication over email.

We value focusing on getting a single job done over having many jobs 'in progress'.

We value failure of a task over never having tried something challenging.

We value our reputation for tight web security over any customer deadline.

Where’s the profit?

What’s the cost if you don’t?

 

My next post shows a diagram of how I see the hidden world of values being used to drive a more focused set of work items aligned to values.

NIST Cyber Security Framework Launched


On the 12th of February 2014, the United States' National Institute of Standards and Technology (NIST) announced a voluntary Cyber Security Framework which provides through-life management of an organization's Cyber Security Programme based on existing standards, guidelines, and practices.

You may ask “what relevance has this outside of the USA?” – well, I imagine if you want to do business in the USA, or your major customer does, you might need evidence of self-assessments and implementations of this framework. Don’t believe me? Check out this quote from the NIST roadmap:

“…Engaging foreign governments and entities directly to explain the Framework and seek alignment of approaches when possible; Coordinating with federal agency partners to ensure full awareness with their stakeholder community; Working with industry stakeholders to support their international engagement;”

Initially I thought this framework's focus would be on reducing risks to national infrastructure, but at its heart it is designed to be used by any organization. That's because small companies are generally understaffed, softer targets, and a weak link in the chain. Hacking the right SMEs could cause a butterfly effect. An FBI awareness initiative in 2012 highlighted this weakness: Cyber Security is everyone's responsibility. (I love that phrase; I'm sure someone had Deming in mind.)

“Organizations can use the framework to determine their current level of cybersecurity, set goals for cybersecurity that are in sync with their business environment, and establish a plan for improving or maintaining their cybersecurity.”

The framework claims to provide a common language for managing 'cyber risk' and to address business protection needs in a cost-effective way. The framework's roadmap intends to extend guidance to supply chain risk management and privacy protection. This is a voluntary framework, but one suspects that some government departments and sub-contractors may not have a choice. Non-regulation makes sense, especially for a new framework, and even more so in a chaotic technology environment. Requirements may emerge for companies to validate their Cyber Security Risk Management Process against the framework, either as part of a bidding process or a sub-contractor/outsourcing evaluation process – which can't be a bad thing, unless used as a sledgehammer or blanket policy for non-critical services.

The framework can help organizations start managing cyber risks from scratch, or be used to identify gaps in current process and thinking to drive outcome focused activities.

It has three main components:

  • The Framework Core – provides a set of activities to achieve specific cyber security outcomes, and references examples of guidance to achieve those outcomes. The core helps plan the activities needed to cover the life of a cyber risk through five functions: Identify, Protect, Detect, Respond, and Recover.
  • A Framework Profile – specific to an organization's requirements, risk tolerance and resources. It aligns elements of the Framework Core to achieve the desired outcomes. An organization may have multiple profiles depending on department, country, or 'current' and 'future' aspirations, each leading to different elements of the Framework Core being chosen.
  • Four Implementation Tiers – from Partial to Adaptive. Each Tier represents an increasing degree of rigor and sophistication in risk management practices, but is not an indication of maturity, as the appropriate Tier depends on the organization's threat environment. Having said that, Tier 2 looks like a bare minimum target for any technology company.

Where do I find out more?

Read the release announcement NIST Releases Cybersecurity Framework Version 1.0.

Read the framework document v.1.0 (41 pages)

I recommend reading the roadmap (9 pages) to get an overview of the framework's maturity, scope & future.

Follow NIST on Twitter @usnistgov.

Visit the framework web site.

Agile code reviews


Here’s the TL;DR on this video from Olivier Gourment on Agile Code Reviews.

The video refers to slides you can't see (a separate download), historical facts and books, and audience participation; here's the short version:

  • Have a check-list for things you should do in your code review (a sample starter list follows below).
  • Why? Because humans make mistakes.
  • Mistakes are expensive and can even cost lives.
  • Many industries have used check-lists very successfully to reduce fatalities, improve safety and quality, and eliminate the potential for human error. They have been proven to work tremendously well.
  • Humans tend to give up on check-lists whose items are out of date, lack value alignment or are too numerous.
  • Agile teams can use check-lists in an agile way – constantly inspect and adapt them.
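
As a hypothetical starter check-list (my invention, not from the video):

  • Does the change do what the story actually asked for?
  • Are errors caught and handled rather than swallowed?
  • Is the new behaviour covered by a test?
  • Has anything been left in that shouldn't ship (debug output, commented-out code)?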

How do we Introduce them?

  • Humans naturally choose to rely on habits and behaviours that they find easier.
  • If you want to make a change in your organization, you have to make it a habit. Friends don't let friends get away with committing un-reviewed code.
  • Keep it simple, short and valuable.
  • Once it is in place at a basic level, and habitual, the team can start adding items.
  • Code reviews are best done together. The conversation alone can invoke better ideas.
  • Decide as a team what the policy is for interrupting/scheduling reviews.
  • Try not to have more than 1 day’s worth of code, or 200 lines per review.
  • Senior developers can be reluctant to expose their code to reviews, or to perform them – it is a difficult challenge.
  • Reluctance issues can be resolved by getting those individuals to describe the check-list they would create for another developer; find out what is valuable to them (ownership and autonomy).
  • Be wary of adding contentious items, as they will affect buy-in. For example, "Errors should be caught" is non-contentious; "coding style" is more contentious.

Do you do code reviews in your organization? Do they help train good behaviour, prevent escaped bugs? Or are they tiresome chains around your wrists? Discuss below!

Pragmatic SOLID – Part 5 – The Dependency Inversion Principle


High-level modules should not depend on low-level modules. Both should depend on abstractions.

Abstractions should not depend on details. Details should depend on abstractions.

TL;DR

This is not a simple topic to describe in isolation. The Dependency Inversion Principle (DIP) fits in with a combination of multiple patterns, principles and tools. For instance:

  • The Strategy Pattern
  • The Interface Segregation Principle
  • The Single Responsibility Principle
  • The Ports & Adapters/Domain Driven Design/Onion design
  • Inversion Of Control Containers
  • The Hollywood Principle
  • The Open/Closed Principle
  • The Repository Pattern (when implementing a real system)

To keep this short, this article will try not to involve too many of them. The minimum facts you need to know are:

  • Dependency Inversion is the architectural principle; Dependency Injection is a method of realizing it.
  • Your class's constructor declares the services your class depends on, specified as abstract interfaces, rather than your class instantiating them (see the sketch after this list).
  • You should have very few 'new' statements in the logic core – yes, really.
  • Avoid static methods on classes; they prevent mocking and tie you into a concrete implementation.
  • Use an Inversion Of Control container (like StructureMap) to make life 'easier'.
  • Think about having a design policy to guide consistency within your team.
  • Do not create an interface for low level domain model classes by default.
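
A minimal sketch of that constructor style (the class and interface names here are invented for illustration, not from any real system):

    public interface INotificationService
    {
        void Notify(string userName, string message);
    }

    public class OrderProcessor
    {
        private readonly INotificationService notifications;

        // The class declares what it needs; it never 'news up' the dependency.
        public OrderProcessor(INotificationService notifications)
        {
            this.notifications = notifications;
        }

        public void CompleteOrder(string userName)
        {
            // ...business logic...
            notifications.Notify(userName, "Your order is complete");
        }
    }

    // An outer layer (or an IoC container) supplies the concrete implementation:
    // var processor = new OrderProcessor(new EmailNotificationService());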

What’s the business value?

Changing implementation details of dependencies becomes easier for developers, to the point that providing an alternative implementation (e.g. SMS instead of email) could be as simple as dropping in a new assembly and changing a configuration file. The code lends itself well to test automation, and will (generally) be easier to read due to cleaner separation of concerns. However, it requires discipline to maintain a consistent application of the theory. Discovering which class implements an interface is not well supported in the default tooling, so some investment in tools/architecture may be required.

That’s a lot to take in, where do I start?

Before describing how dependency inversion is achieved in C#, there are some fundamental architecture concepts that need to be understood. The traditional three tier model of separation of concerns doesn't suit the dependency inversion pattern, but a variant of it does: the Onion Architecture. Its benefits and application are best described in Jeffrey Palermo's blog – Onion Architecture – and a summary of developments can be found in The Clean Architecture by Bob Martin.

In summary, the goal is to make a core module the focus of our application, which all other modules will depend on. The core defines the domain classes (typically 'business objects') and the interfaces for the services it requires. The core will not implement interfaces that represent external dependencies; those are implemented in a higher layer and injected into the core classes. This allows strong decoupling of infrastructure items such as third party libraries, the database, configuration files, notification mechanisms (email, push, SMS), security, encryption and persistence. Implementations of these components can then change with reduced risk to the core, and core classes can be surrounded by tests with minimal dependency setup. The ideal situation is that each dependency can be mocked.

How does this differ from a three tier architecture?

A typical 3 tier architecture – a User Interface layer (UI) instantiating Business Logic Layer (BLL) classes, which in turn instantiate Data Access Layer (DAL) classes (or external service classes) – is remodelled.

Let’s say in our Onion design, that the BLL is now the core, the lowest point in the dependency chain.

It will contain:

  • domain model classes (e.g. User, Basket, Product).
  • the abstract definition, as interfaces, of services the core requires from outer layers of the onion, e.g. IUserRepository, ICheckout, IUserQueries, ILogging, INotificationService – some of which might take domain objects as parameters.
  • concrete service classes providing services that bind these abstract definitions together with business rules, e.g. LogUserDetails(ILogging, IUserQueries, int currentUserID).
  • services which use other services within the same core. These are typically represented by interfaces for consistency, but implemented in the core; this mild over-engineering mainly benefits testing.

The Data Access Layer will reference the core and can therefore use the business objects and logic – an inversion of the traditional 3 tier model. One typical piece of overkill at this point is to provide an interface for every core class; that is not necessary and will cause bloat fast. If you want to see some pictures, head over to Jeffrey Palermo's blog – Onion Architecture.
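
To make the inversion concrete, here is a minimal sketch under those assumptions (User, IUserRepository and SqlUserRepository are invented names for illustration):

    // Core assembly: the domain object and the abstraction the core needs.
    public class User
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }

    public interface IUserRepository
    {
        User GetByID(int id);
    }

    // Data Access assembly: references Core and provides the implementation,
    // which is injected wherever the core asks for an IUserRepository.
    public class SqlUserRepository : IUserRepository
    {
        public User GetByID(int id)
        {
            // ...real database access would live here...
            return new User { ID = id, Name = "example" };
        }
    }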

We are also not treating critical frameworks as dependencies we need to invert. (We do not intend to inject .NET!)

Typical examples of interfaces we might define in core, but implement outside of the core are:

  • Persistence interfaces such as databases and configuration files, e.g. IUserRepository, IConfiguration.
  • Communication (email, push, SMS), e.g. INotificationService.
  • External web services/communication with 3rd party systems, e.g. IGridComputing.
  • System resources, e.g. ISystemInfo – we could then test how the code behaves on different platforms.
  • Authentication and authorization services.
  • Thread.Sleep, random value generation, the clock – these might be best wrapped with a function like in this fantastic post (I believe this is the Ambient Context pattern); a small sketch follows this list.
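
For the clock case, a small sketch of the idea (IClock and friends are invented names – one way to model it, not the linked post's exact code):

    public interface IClock
    {
        DateTime UtcNow { get; }
    }

    // Production implementation simply defers to the system clock.
    public class SystemClock : IClock
    {
        public DateTime UtcNow { get { return DateTime.UtcNow; } }
    }

    // Test implementation lets a test freeze or advance 'now' at will.
    public class FakeClock : IClock
    {
        public DateTime UtcNow { get; set; }
    }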

Example

I found some reasonably good examples (below), so why create my own?

Dependency Injection For Dummies – Kevin Pang

The Dependency Inversion Principle – Gabriel Schenker. Although I would say that the DoEncryption method is ripe for becoming an inverted dependency; we might need different algorithms under different export laws.

Layered Architecture, Dependency Injection, and Dependency Inversion – Boodhoo, Jean-Paul S

More on DI…

As mentioned, in a typical three tier architecture high-level modules tend to call low-level modules. It's quite typical for a UI app to 'new up' a business logic class, call its methods, and new up other classes to create the dependencies that help it perform its function. The UI becomes the controller for the dependency chain – doesn't sound right, does it? The UI's responsibility is no longer just the UI in this scenario. Static methods are then usually created to decouple access to all this; in general, static methods prevent mocking and tie code to concrete classes, so they aren't terribly friendly beasts. There is a whole mountain of brittleness in this model. The DIP and IoC containers allow us to decouple concretions and plumbing from all layers.

Once we've injected all our interfaces into our classes via a constructor, you may think we have broken encapsulation and information hiding by exposing the internals of a class. This was my first reaction too. But conversely, hiding class dependencies makes classes harder to test, more tightly coupled, more ambiguous and less self-documenting. With tools like StructureMap, the constructor parameters can be created automatically, so in reality you rarely have to call the fully verbose constructor yourself – dulling my objections.
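
For instance, a registration along these lines (StructureMap's For/Use style; the types are the invented ones from the earlier sketch) lets the container build the whole chain for you:

    // using StructureMap;
    var container = new Container(x =>
    {
        x.For<INotificationService>().Use<EmailNotificationService>();
    });

    // The container resolves OrderProcessor's constructor arguments for us.
    var processor = container.GetInstance<OrderProcessor>();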

There are a few mechanisms through which you can inject dependencies; constructor injection is not the only way, although it is usually the lesser of the evils. (A combined sketch of the three styles follows the lists below.)

Constructor injection

+ Classes always valid once constructed

+ Contract is obvious and clean, “to use me, you need X Y and Z”

- Can be ugly

- May also need a default constructor for serialization

- Not all of the dependencies may be needed to execute the one method you actually wanted from the class – but that could be a design smell (SRP).

Property (or setter) injection

+ Can tweak the interfaces at any time

- Objects can be in an invalid state at any time; more error prone.

- Less intuitive – how do you know if there's an ordering, or what minimum set of setters needs to be called?

Parameter injection

+ Each method has the exact dependency passed.

+ Granular & flexible

+ Great if you only have one method that requires this dependency.

- Adds to the method signature, which is brittle

- Can result in huge parameter lists.

- Can be very repetitive (WET)
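
To make the trade-offs concrete, here is a side-by-side sketch of the three styles (ILogging, IFormatter and IExporter are invented for illustration):

    public interface ILogging { void Write(string message); }
    public interface IFormatter { }
    public interface IExporter { void Export(object report); }

    public class ReportGenerator
    {
        private readonly ILogging log;

        // 1. Constructor injection: mandatory, obvious, and the object
        //    is valid once constructed.
        public ReportGenerator(ILogging log)
        {
            this.log = log;
        }

        // 2. Property (setter) injection: optional, but the object can be
        //    used before Formatter is ever assigned - more error prone.
        public IFormatter Formatter { get; set; }

        // 3. Parameter injection: only this one method needs the exporter.
        public void Export(IExporter exporter)
        {
            log.Write("Exporting report");
            exporter.Export(this);
        }
    }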

The Middleton Rule

  • Look before you leap! This isn’t something that is trivial to apply after you have finished your project.
  • Be consistent with the application of it.
  • Apply DIP for testing, but try to have more reasons (architecture, pluggability).
  • Use different assemblies for each layer.
  • Don’t create an interface for simple domain objects unless you really need to.

More References

Agile Principles, Patterns, and Practices in C# – Robert C Martin

Inversion of Control Containers and the Dependency Injection pattern – Martin Fowler

StructureMap

Onion Architecture by Jeffrey Palermo

Domain Driven Design by Eric Evans

Professional ASP.NET Design Patterns – Scott Millett

SOLID Principles of Object Oriented Design on Pluralsight – Steve Smith

Creating N-Tier Applications in C#, Part 1 on Pluralsight – Steve Smith

Creating N-Tier Applications in C#, Part 2 on Pluralsight – Steve Smith

Pragmatic SOLID – Part 4 – The Interface Segregation Principle


Clients should not be forced to depend on methods they do not use.

TL;DR

  • Do not let client code use large classes which expose them to risk.
  • Split these classes into smaller classes (Single Responsibility Principle), or Interfaces.
  • Keep class interaction lean and focused.
  • Let interface design be dictated by the consuming clients, or by a logical service a subset of methods provides (e.g. Logging, UIConfirmation).
  • Interface Segregation does not necessarily mean using the interface keyword, but it is designed for the purpose!
  • Automatically creating an Interface every time you create a class or method will lead to trouble.
  • An interface can also be too big – and may require segregating further.

What’s the business value?

By avoiding a monolithic object (or interface) from which developers pick and choose arbitrary methods, code is easier to decouple, test and maintain. There will generally be fewer (unnecessary) dependencies and less code to review when making a change to a smaller interface. However, over-application of the principle can create a large amount of code complexity through over-simplification for no real gain; e.g. creating a low level interface with just one method which is only consumed by one class.

Come again?

This principle ensures cleaner code and prevents the creation of one class to rule them all, knitted into the entire code-base. This scenario often occurs when a primary central object exists and, over time, developers add single methods to it in a hurry. They typically add their function, show some working software and walk away. Left unchecked, they or other developers follow suit. Soon there are 10 new methods on the class, polluting its very fabric.

Any seasoned developer knows that code is read and maintained far more times than it is written, so it is a fundamental principle to optimize for the reader and produce an architecture that doesn't leave a new starter scratching their head. Having to scroll through endless function lists makes a class harder to maintain. It is also no longer clear what the primary purpose of the class is, and developers will be less keen to refactor it as it becomes larger and larger. If all client classes take this monolithic beast as their constructor argument, it is never clear what aspect of it they really depend on.

By applying interface segregation and only providing client classes with  the interfaces they need, developers can

  • introduce less risk when changing the code
  • easily plug in different implementations and have lean abstractions
  • easily provide fake implementations for testing

Example

You can use classes to provide interface segregation through the façade pattern – much like applying the single responsibility principle. However, the more typical way is to create an interface, and declare that your class implements all the interfaces it provides. In C#, multiple inheritance of classes is not supported, but implementing multiple interfaces is. One could also avoid inheritance altogether and provide properties to get to the interfaces, but you would lose the ability to cast objects to their interfaces.

Let's take the final example from our Single Responsibility Principle tutorial and see how we can make sure that we do not expose a PrintAGraphInA4 method to more than it needs to know. I've stripped out the dependent classes and changed the Main() routine so that it simply calls a helper method to print the graph.
Here's the code before ISP.

   namespace ISP
   {
        class Printer
        {
            public enum PaperType { A4_PAPER, A3_PAPER };
            public PaperType Paper { get; set; }
        }

        static class Program
        {
            // small, fictional program to load up a previous graph settings file
            // and print the graph
            static void Main()
            {
                MyGraph graph = new MyGraph();

                graph.Load(@"C:\settings.txt");

                PrintAGraphInA4(graph);
            }

            static void PrintAGraphInA4(MyGraph graph)
            {
                // Note - the code only needed access to the Print Method
                // but was exposed to Load/Save/MathModel/GraphTitle
                Printer printer = new Printer();
                printer.Paper = Printer.PaperType.A4_PAPER;
                graph.Print(printer);
            }

        }

        public enum TransformationModel { none, modelX1, modelX2, modelX3 };

        class MyGraph
        {
            private MyGraphSettingsModel settings = new MyGraphSettingsModel();
            private MyGraphDataModel dataModel = new MyGraphDataModel();
            private MyGraphStorage storage = null;
            private MyGraphPrinter printer = null;
            private MyGraphPointsCalculator calc = null;

            public MyGraph()
            {
                storage = new MyGraphStorage(settings);
                printer = new MyGraphPrinter(dataModel);
                calc = new MyGraphPointsCalculator(dataModel);
            }

            public TransformationModel MathModel
            {
                get
                {
                    return settings.Model;
                }
                set
                {
                    settings.Model = value;
                    calc.CalculatePoints(settings.Model);
                }
            }

            public String GraphTitle
            {
                get { return settings.GraphTitle; }
                set { settings.GraphTitle = value; }
            }

            public void Load(String fileName)
            {
                storage.Load(fileName);
                calc.CalculatePoints(settings.Model);
            }

            public void Save(String fileName)
            {
                storage.Save(fileName);
            }

            public void Print(Printer printerDevice)
            {
                printer.Print(printerDevice);
            }
        }
   }

In the example above, the function PrintAGraphInA4 is tightly coupled to the MyGraph class. We could have 'solved' that by creating a Graph base class with overridable Print methods. However, that still leaves the PrintAGraphInA4 function exposed to a number of methods it does not need (Load/Save and more), which means we have to review PrintAGraphInA4 whenever MyGraph changes. Creating a base class isn't the silver bullet.

The more useful thing to do would be to create an IPrintable interface. This would allow our function to be reused by more than just graphs. It also means that ONLY changes to the IPrintable interface or the implementation of the graph’s IPrintable will be the catalyst for a code review of PrintAGraphInA4.

Here is the minimal code change:

        static class Program
        {
            // small, fictional program to load up a previous graph settings file
            // and print the graph
            static void Main()
            {
                MyGraph graph = new MyGraph();

                graph.Load(@"C:\settings.txt");

                // because inheritance is used for MyGraph : IPrintable
                // we can simply pass the graph to automatically cast to IPrintable
                PrintAnyThingPrintableInA4(graph); 
            }

            static void PrintAnyThingPrintableInA4(IPrintable printable)
            {
                // we do not care what we are printing. 
                Printer printer = new Printer();
                printer.Paper = Printer.PaperType.A4_PAPER;
                printable.Print(printer);
            }
        }

        interface IPrintable
        {
            void Print(Printer printerDevice);
        }

        class MyGraph : IPrintable
        {

We could create interfaces for IGraphStorage and IGraphSettings, but only if it is beneficial.

What if I need to pass multiple interfaces to a class – should I create an aggregate?
Bob Martin recommends not passing an aggregate interface object around when multiple interfaces are used in a client method or class. In nearly every case it is preferable to pass the specific interfaces.

So, when should I join interfaces together?
If, for example, a class has been split into a number of interfaces to cover client usage patterns, but two interfaces are always used together to deliver the full service to the client, it might make sense to combine those interfaces into one. This is especially true if both interfaces implement one or more methods in the same way and you believe there was an overzealous application of ISP.
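
In C# that combination can be expressed with interface inheritance – a tiny sketch (names invented for illustration):

    interface ILoadable { void Load(String fileName); }
    interface ISavable { void Save(String fileName); }

    // If clients always need both halves together, one combined
    // abstraction may serve them better than two fragments.
    interface IPersistable : ILoadable, ISavable { }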

Should I develop a service based model or a client based model?
A service model delivers an interface whose methods provide a distinct service to clients. We might develop such an interface in the absence of any client code. E.g. IPushNotificationService, with a simple SendNotifications() method.

A client model delivers an interface whose methods provide only what is required, based on empirical analysis of client usage patterns. If left unmonitored, this can introduce many client-specific interfaces tailored to the needs of individual classes/methods. This can really help minimize the impact of change, but can also lead to inflexibility and prevent reuse.

The Middleton Rule

  • If clients of a class consistently only use a subsection of methods, it’s likely those methods should be an interface.
  • If you are passing a large object around because it’s easy – you are only storing up technical debt. Apply some form of ISP.
  • Don’t get silly. A gazillion “one method interfaces” will drive you crazy.
  • When you start to think “I shouldn’t change this class because so many others might use it and I don’t know the impact”, you’re probably exposing too much.
  • If you find fat interfaces are causing a huge dependency chain for your production code (and tests), apply some more ISP.
  • For fat 3rd party interfaces, create your own Adapter in the middle and separate the interfaces (a sketch follows this list).
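
As a sketch of that last point (the vendor class and interfaces are invented for illustration):

    // Imagine a fat vendor class with dozens of members.
    class ThirdPartyPrinterDriver
    {
        public void PrintDocument(String text) { /* vendor code */ }
        // ...calibration, spooling, diagnostics, etc....
    }

    // The one small interface our code actually needs.
    interface ISimplePrinter
    {
        void Print(String text);
    }

    // The adapter keeps the fat type out of our dependency chain.
    class PrinterAdapter : ISimplePrinter
    {
        private readonly ThirdPartyPrinterDriver driver = new ThirdPartyPrinterDriver();

        public void Print(String text)
        {
            driver.PrintDocument(text);
        }
    }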

References

http://www.oodesign.com/interface-segregation-principle.html

http://www.objectmentor.com/resources/articles/isp.pdf

My personal quick reference for Oracle


Working with Oracle is traumatic enough without having to remember things about it too.

Clearing caches

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;

http://www.dba-oracle.com/t_flush_buffer_cache.htm

Shutting down

CONNECT SYSTEM/pwd@instance AS SYSDBA
SHUTDOWN NORMAL;
then wait for everyone to disconnect.
Or, wait for active transactions to finish by
SHUTDOWN TRANSACTIONAL;
Or, rollback active transactions and disconnect clients
SHUTDOWN IMMEDIATE;
Or if that fails...
SHUTDOWN ABORT;

http://docs.oracle.com/cd/B10501_01/server.920/a96521/start.htm#6398

Discover the Oracle instance's parameter configuration

    column c1 heading 'Name' format a20;
    column c2 heading 'Value' format a20;
    SELECT NAME c1, DISPLAY_VALUE c2 FROM V$PARAMETER;

Explaining Plans

EXPLAIN PLAN FOR <my sql statement>;
then
SET LINESIZE 130
SET PAGESIZE 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
or
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

How much free space in tablespaces?

SELECT /*+ RULE */ df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name (+)  = df.tablespace_name
GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /*+ RULE */ df.tablespace_name tspace,
fs.bytes / (1024 * 1024),
SUM(df.bytes_free) / (1024 * 1024),
Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM dba_temp_files fs,
(SELECT tablespace_name,bytes_free,bytes_used
FROM v$temp_space_header
GROUP BY tablespace_name,bytes_free,bytes_used) df
WHERE fs.tablespace_name (+)  = df.tablespace_name
GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
ORDER BY 1 DESC;
Or, more simply, for temporary tablespaces:
SELECT * FROM DBA_TEMP_FREE_SPACE;

Rebuilding indexes

http://psoug.org/reference/dbms_index_utl.html

Revert to an older version of the query optimizer

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.2.0.4';

Discover what previous versions you can use by..

SELECT value FROM v$parameter_valid_values WHERE name = 'optimizer_features_enable';

Shrinking the temporary tablespace

First, remove any locks by performing a shutdown (follow the steps above for shutting down).

SHUTDOWN NORMAL;

Find what the temporary file is called internally by examining the output from

SELECT name FROM V$TEMPFILE;

Let's imagine it's TEMP.DBF, as below.

Shrink it

ALTER DATABASE TEMPFILE 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\TEMP.DBF' RESIZE 1000M;

or, to make a restricted size tempfile

CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'c:\oraclexe\app\oracle\oradata\xe\temp2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 5M MAXSIZE 1100M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

You might have to alter the default for specific users too.

http://dbafix.blogspot.co.uk/2010/08/how-to-drop-and-recreate-temp.html

Slow oracle queries using cartesian joins

My experience of Oracle 11gR1 was never a pleasant one, but I did encounter a situation where the query optimizer would choose a terrible cartesian join and bloat the temp DB to the eyeballs. This was fixed in 11gR2, as far as I can tell. In the meantime, a logon trigger can disable cartesian joins for the session:

CREATE OR REPLACE
TRIGGER moodsessions AFTER LOGON ON SCHEMA
BEGIN
execute immediate 'ALTER SESSION SET "_optimizer_cartesian_enabled"=false';
END;
/

Hints in SQL Queries

SELECT /*+ RULE */ blah FROM blahTable; -- rule based optimizations
SELECT /*+ ORDERED */ blah FROM blahTable; -- joins should be done in the order I say
SELECT /*+ OPT_PARAM('_optimizer_cartesian_enabled','false') */ blah FROM blahTable; -- disable cartesian joins
SELECT /*+ OPT_PARAM('optimizer_search_limit',2) */ blah FROM blahTable; -- do not test 4 joins, just test 2
SELECT /*+ NO_QUERY_TRANSFORMATION */ blah FROM blahTable; -- disable query transformations

Note that the + must immediately follow the /* – with a space in between, Oracle silently treats the hint as a plain comment.

http://www.dba-oracle.com/art_otn_cbo_p1.htm

Meta data

You can swap the user_ prefix for all_ to expand the results beyond the current user's objects, or back again to restrict them.

SELECT table_name,constraint_name FROM user_constraints;

http://ss64.com/orad/USER_CONSTRAINTS.html

SELECT trigger_name FROM user_triggers;

http://ss64.com/orad/USER_TRIGGERS.html

SELECT table_name, index_name from user_indexes;

http://ss64.com/orad/USER_INDEXES.html

SELECT table_name,column_name,nullable FROM user_tab_cols;

http://ss64.com/orad/USER_TAB_COLUMNS.html

Are there invalid triggers in my database?

SELECT * FROM USER_OBJECTS WHERE OBJECT_TYPE='TRIGGER' AND STATUS='INVALID';

Compiling triggers

A subtlety in Oracle is that triggers are not compiled when CREATE OR REPLACE is used, so compiling them before they fail on first use is handy.

ALTER TRIGGER <SCHEMANAME>.<TRIGGERNAME> COMPILE;

IF EXISTS equivalent in Oracle

IF EXISTS is really handy in SQL Server, but Oracle is behind the times here. What is a one-liner in SQL Server turns into this:

DECLARE
  L_EXIST NUMBER;
BEGIN
  SELECT COUNT(*) INTO L_EXIST FROM /* MY QUERY */;

  IF L_EXIST > 0 THEN
    /* DO SOMETHING */
  END IF;
END;

Unlocking an account

ALTER USER myaccountname ACCOUNT UNLOCK;

Preventing password expiry

You can discover the profile which needs altering by running…

SELECT username, profile, lock_date, expiry_date FROM DBA_USERS WHERE username='MYUSERNAME';

Then… (assuming the profile is DEFAULT)

ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;

ORA-06550, PLS-00103 Encountered the symbol <blah> when expecting one of the following

Oracle is a pain. When there's no obvious typo, like a missing quote, and no end-user stupidity in the syntax, you might still experience this cryptic error.

One of my common pitfalls is trying to execute schema level operations inside some normal SQL logic branch. E.g. If index B exists, drop it.

Simply put, DDL cannot run inside a PL/SQL block alongside ordinary SQL statements; it has to be wrapped in EXECUTE IMMEDIATE. The error can bite when sending SQL via ODBC just as easily as within the PL/SQL window.

http://arjudba.blogspot.co.uk/2009/02/how-to-run-ddl-statements-within-plsql.html