What is the best way to inject a logger?

I’m working on several projects right now that I’d like to be able to generate logs from.

The problem is that logging isn’t required for the code to work, so how should the logger be injected?

I see three possibilities:

Required in Constructor

Pros

  • Cleanest implementation

Cons

  • Requires user to pass in a logger even if they don’t want to log anything
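
A minimal sketch of this option, assuming a PSR-3 logger and an invented Worker class:

    use Psr\Log\LoggerInterface;

    class Worker
    {
        private $logger;

        // The logger is a hard requirement: the object cannot be built without one.
        public function __construct(LoggerInterface $logger)
        {
            $this->logger = $logger;
        }

        public function doWork()
        {
            $this->logger->info('Doing work');
        }
    }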

Optional in Constructor

Pros

  • No longer requires the user to pass in a logger

Cons

  • Clutters up the constructor
  • I consider using null a code smell
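
The same sketch with the logger made optional. One way to avoid null checks inside the class is to fall back to PSR-3’s NullLogger; that fallback is my preference, not a requirement of this option:

    use Psr\Log\LoggerInterface;
    use Psr\Log\NullLogger;

    class Worker
    {
        private $logger;

        // Passing null (or nothing) is allowed; the extra parameter and the
        // null handling are what clutter the constructor.
        public function __construct(LoggerInterface $logger = null)
        {
            $this->logger = $logger ?: new NullLogger();
        }
    }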

Optional setLogger Method

Pros

  • Clean constructor
  • No nulls

Cons

  • Instantiating with a logger requires two statements
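
This option is essentially what PSR-3 standardizes as Psr\Log\LoggerAwareInterface; the trait below ships with psr/log and provides the setter:

    use Psr\Log\LoggerAwareInterface;
    use Psr\Log\LoggerAwareTrait;

    class Worker implements LoggerAwareInterface
    {
        // LoggerAwareTrait adds a protected $logger property and a
        // public setLogger(LoggerInterface $logger) method.
        use LoggerAwareTrait;
    }

    // Instantiating with a logger takes two statements:
    $worker = new Worker();
    $worker->setLogger($logger); // $logger is any PSR-3 implementation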

Summary

I’m leaning toward the third method for my packages even though it doesn’t feel like an ideal solution. It just feels better than the other two.

How are you handling injecting an optional logger? Let me know in the comments.

PSR-7 Objects Could Be Immutable

I’ve been thinking a lot about immutable objects lately. Yegor Bugayenko claims that Objects Should Be Immutable, and PSR-7: HTTP message interfaces are designed to be immutable.

Messages are values where the identity is the aggregate of all parts of the message; a change to any aspect of the message is essentially a new message. This is the very definition of a value object. The practice by which changes result in a new instance is termed immutability and is a feature designed to ensure the integrity of a given value.
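
In practice that means the with* methods return a new instance instead of modifying the message in place. A quick sketch, assuming guzzlehttp/psr7 (any PSR-7 implementation behaves the same way):

    use GuzzleHttp\Psr7\Request;

    $request = new Request('GET', 'https://example.com');
    $withAccept = $request->withHeader('Accept', 'application/json');

    var_dump($request->hasHeader('Accept'));    // false: the original is untouched
    var_dump($withAccept->hasHeader('Accept')); // true: the change produced a new message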

There has been some discussion around the mutability of streams in PSR-7.

Andrew Carter wrote a post discussing why PSR-7 Objects Are Not Immutable, saying:

You shouldn’t expect to see the message that was written to the response actually rendered. The code we have written explicitly avoided returning the response by throwing the exception. However, if you disable the Whoops error handler and use the default templated error handler you will find that your message still appears at the top of the error page that is generated by the framework.

Paul M. Jones mentions the immutability of streams in Avoiding Quasi-Immutable Objects in PHP.

If a stream or similar resource has been opened in a writable (or appendable) mode, and is used as an immutable property, it should be obvious that object immutability is not preserved.

This makes sense. If you write to a stream, then rewind it and read it back, the value has changed.
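
A sketch using the PSR-7 StreamInterface methods; I’m constructing the stream with Diactoros here, but any implementation exposes the same write/rewind/getContents calls:

    use Zend\Diactoros\Stream;

    $stream = new Stream('php://temp', 'wb+');

    $stream->write('first');
    $stream->rewind();
    echo $stream->getContents(); // "first"

    $stream->rewind();
    $stream->write('second');
    $stream->rewind();
    echo $stream->getContents(); // "second": the stream's value has changed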

I don’t think this makes the class wrapping the stream mutable.

Yegor talks about this idea in Gradients of Immutability.

His argument would be that the object isn’t a constant, but it is immutable.

The object isn’t representative of the content of the stream, but of the reference to the stream.

It’s like if you had an object that interacted with a file.

You might argue that a File object isn’t immutable because, if it were, $f->read() should have returned “hoopla”. But I disagree.

The File object is wrapping a file name, a pointer to a file in the file system. The object isn’t constant, but it is immutable because its data is the file name and not the contents of the file.

I’d be breaking immutability if I added a setFilename method that allowed you to change what file the object was pointing to.
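
A minimal sketch of the kind of File wrapper I’m describing (the class and file name are illustrative):

    class File
    {
        private $filename;

        public function __construct($filename)
        {
            // The object's only data is the file name.
            $this->filename = $filename;
        }

        public function read()
        {
            return file_get_contents($this->filename);
        }

        public function write($contents)
        {
            // This changes the file on disk, not the object's data.
            file_put_contents($this->filename, $contents);
        }

        // A setFilename() mutator is the thing that would break immutability.
    }

    $f = new File('example.txt');
    $f->write('something new');
    echo $f->read(); // reflects the new contents, yet $f's own data never changed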

Zend Diactoros

The PSR-7 implementation that I typically use is Zend Diactoros because it’s what Radar uses.

I was hoping to dig in and show how it actually is immutable, but I can’t. Because it isn’t. 🙁

It could have been, but the Stream::attach method makes it mutable.

The strange thing is that it implements Psr\Http\Message\StreamInterface, which does not define an attach method.

So Zend\Diactoros\Stream chooses to define additional methods that are not part of the PSR-7 interface, and in doing so, breaks immutability.
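
Roughly, the problem looks like this (I’m recalling the attach signature from memory, so treat the details as approximate):

    use Zend\Diactoros\Stream;

    $stream = new Stream('php://temp', 'wb+');
    $stream->write('original');

    // attach() is not part of Psr\Http\Message\StreamInterface. It swaps the
    // wrapped resource in place, so the object's own data changes after construction.
    $stream->attach('php://memory', 'wb+');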

Guzzle PSR-7

Looking at other implementations, it would appear Guzzle PSR-7 streams are immutable.

My Git Development Workflow

In my development, I use Git all the time. It was a little tricky to figure out at first, but I feel like I have a good understanding of how to use it.

When I’ve worked with other developers, I’ve seen them struggle with a few concepts that I use regularly, so I thought it might be helpful to document my workflow.

On our team, we use feature branches and pull requests. I don’t have a strong preference as to whether the feature branch lives in our main repo or in a developer’s fork of our repo. For this post, I’ll assume feature branches are in the main repo.

Let’s start with an existing master branch.

I’m going to work on a new feature, so I create a new branch off of master and start committing my changes.
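
In command form that looks roughly like this (the remote and branch names are placeholders):

    git checkout master
    git pull origin master
    git checkout -b my-feature
    # ...edit files, committing as you go...
    git add -A
    git commit -m "Start my feature"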

During that time, other developers have merged in features of their own.

I’m ready to submit my PR, but first I need to rebase off of master, potentially resolving conflicts that emerge.
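
The rebase step looks something like this:

    git fetch origin
    git rebase origin/master
    # if there are conflicts: fix them, `git add` the files, then
    git rebase --continue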

I need to make changes based on a code review, so I commit those to my repo. Meanwhile, other features are merged into master.

Before I have my new changes reviewed I once again need to rebase off of master.

To push up to the main repo I need to do a force push because I’m overwriting what was previously there.
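
Because the rebase rewrote the branch’s history, a plain push is rejected; --force-with-lease is a safer form of force push that refuses to overwrite commits you haven’t already seen:

    git push --force-with-lease origin my-feature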

My PR is accepted, so it’s squashed and merged into master as a single commit (via GitHub), and the feature branch is deleted.

Once something is merged into master it’s locked. Other than specific edge cases, you shouldn’t ever have to force push to master, only to feature branches.

With our main SaaS applications I also use Git to deploy to staging and production. I wouldn’t use this for library development or any applications that have multiple deployments.

I have special branches, staging and production, which our continuous integration platform watches. When I push changes to those branches, they are tested (as with all pushes) and, if the tests pass, deployed to that environment.

So if we wanted to deploy my latest feature, I’d merge it into staging and have our team review it to determine if there are any issues that need to be resolved before deploying to production.

If there are changes, we create a new feature branch and start at the top of this workflow.

If there are no changes to be made, then we merge into production, and it’s tested and deployed.

That’s it! Developers I’ve worked with tend to get hung up on rebasing. I know some workflows never rebase and always merge, but I strongly prefer keeping things clean with rebasing.

I know that there are a million ways to use Git and different teams have different workflows. What do you think of my process? Anything you’d change?

Command Line Action-Domain-Responder (ADR)

I’ve created a new proof-of-concept ADR library that works like Radar, only for the command line interface.

Yesterday I was working on a project in Radar and needed to create a command line tool for it.

In the past, I’ve always used Symfony Console, which I like. Since my application was already built using Radar and adhering to Action-Domain-Responder and Clean Architecture, I wanted a solution that was more consistent with the rest of the codebase.

The selling point of Clean Architecture is that your domain logic is separate from the delivery mechanism. So if Radar was my delivery mechanism, I should be able to have a different command line delivery mechanism, and my domain shouldn’t know or care.
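
As a hypothetical sketch (the class name is invented), the domain side is just a plain service that knows nothing about HTTP or the console:

    class CreateInvoice
    {
        public function __invoke(array $input)
        {
            // Domain logic only: validate the input, persist, return a payload.
            // A Radar HTTP action and a CLI action can both invoke this class;
            // only the input marshalling and the responder differ.
            return ['status' => 'created', 'input' => $input];
        }
    }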

I took a look at Aura.Cli which seems nice. However, it’s very much a command line library. Out of the box, it’s not a framework for building command line tools adhering to ADR.

So I took a stab at writing one (that uses Aura.Cli); it’s called Cadre.CliAdr.

At this point, it’s more of a proof of concept rather than a production solution. It’s heavily based on Radar and uses the same patterns.

Here is an example of how you’d use it. It will seem very familiar if you’ve used Radar.

Expect revisions and documentation to come. In the meantime, please post any comments here or as an issue on GitHub.

Introducing Cadre.Module

I created a new library that works with Aura.Di to allow dependent modules to be specified and loaded automatically.

Today I published a new component Cadre.Module.

This component was born out of my side project that’s using Radar.

Stock Radar

Radar is built around Aura.Di, which is a very nice dependency injection container. If you’re interested in learning more about Radar, check out Radar Under the Hood.

One thing that’s great about Aura.Di is the concept of ContainerConfig objects.

Here is an example:
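
The sketch below wires up Twig; the template path and service name are illustrative, but define() and modify() are the two hooks a ContainerConfig gives you:

    use Aura\Di\Container;
    use Aura\Di\ContainerConfig;

    class TwigConfig extends ContainerConfig
    {
        public function define(Container $di)
        {
            // Register a shared Twig environment as the 'twig' service.
            $di->set('twig', $di->lazy(function () {
                $loader = new \Twig_Loader_Filesystem(__DIR__ . '/templates');
                return new \Twig_Environment($loader, ['cache' => false]);
            }));
        }

        public function modify(Container $di)
        {
            // Post-definition adjustments would go here.
        }
    }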

This is nice because I can package my DI configuration into smaller classes that each configure a single thing. An example of this could be one ContainerConfig that configures Twig and another that configures Atlas.Orm.

The problem I ran into was where to put configuration that relates to several different areas. There was also the question of how to handle things that need different configuration in dev and in production.

The specific use case that prompted the development of this library was configuring PHP Debug Bar, which I’ve been working with a lot lately.

PHP Debug Bar can collect data from many different sources. I’m using it currently with Twig and Atlas.Orm. When I push my project to production, I will not want to be loading PHP Debug Bar, but it’s very useful during development.

It’s also possible that I’ll want to reuse this code in the future on projects that may not use Twig and/or Atlas.Orm.

Cadre.Module

In Cadre.Module I introduce the Module and ModuleLoader classes. Both are usable as ContainerConfig objects. Modules define four additional methods that are inspired by Composer.

Each module can define which other modules it requires, requires in dev, conflicts with, and replaces.

I started out also implementing provide and suggest, but decided that they had little to no application in this use case.

While the new modules are just ContainerConfigs with extra metadata, the module loader does the heavy lifting.

While still behaving as a ContainerConfig, a module loader starts out with a list of modules in the same way composer.json defines your starting packages.

When it’s used, it goes through the list, loading each module along with its required modules, and checks for conflicts and replacements along the way.

The define and modify methods just proxy through to all of the loaded modules.

The other neat feature of Cadre.Module is that modules can query the module loader to see if another module is loaded.

This is especially useful for optional modules or development modules that may not always be present.

Conclusion

I’m interested in what you think of Cadre.Module. Check out the documentation and ask questions either here in the comments or as an issue on GitHub.