WordCamp Paris 2023 recap

Introduction

I was honored to be selected once again as a WordCamp speaker, this time at WordCamp Paris on April 21st. It was my second WordCamp and, as I did for Lyon, here's a recap of this very nice day spent alongside the WordPress community.

Being selected as a speaker

I had a great time speaking in Lyon. It was my first WordCamp, and also my first time as a speaker.

I prepared a lot for that day, built a thorough FSE talk, and rehearsed the presentation. It was a lot of work and, I must admit, a lot of stress, which made me wonder if I would do it again.

Then the WordCamp Paris call for speakers went live, with different talk formats compared to Lyon, which focused on classic talks. In Paris, you could propose either a lightning talk or a “parlotte”, a more participative format designed to engage the conversation with your audience. I found this new format interesting because it's more interactive and less formal.

Furthermore, at the time, I was right in the middle of a very interesting discussion with Florian Truchot. We were debating the optimal use of blocks in different advanced scenarios. We enjoyed the exchange and agreed on some very interesting conclusions. From there came the idea to continue the conversation with the community and propose a parlotte about the search for the optimal use of blocks to meet demanding business requirements.

After a few weeks, the talk got approved. It was time to prepare again, but in a different format, and as a duo this time.

Preparing the slides

At the time, we were facing different situations in our respective jobs, and that naturally gave us the talk outline. We would discuss four scenarios and analyze the best use of blocks for each of them, while being careful to avoid particularly bad uses.

Here are the four scenarios:

  • Creating a theme that makes heavy use of blocks
  • Integrating with a Design System
  • Website industrialization (production at scale)
  • Headless WordPress

We had a great time preparing the slides in collaborative sessions, and made adjustments until we felt the material was ready.

Conference eve

As a speaker, I got invited to the pre-conference dinner with all the other speakers, the organizers, and the sponsors. It was at the Stalingrad Rotonde, where the food was great.

I reconnected with friends I made in Lyon, and had the chance to have interesting conversations with the guests there. We were all excited about the coming day, and couldn’t wait to deliver our talks, meet the attendees, and watch the other conferences.

Conference day

The event was located in the Pajol Halles in north Paris.
There were three different conference spots: one theatre for big presentations, one medium conference room, and a smaller one.

Here’s a recap of the talks I’ve attended:

Gutenberg, comment le transformer en outil surpuissant pour le SEO ? (“Gutenberg: how to turn it into a super-powerful tool for SEO”) By @rochdaniel

Daniel Roch

I started the day with Daniel Roch, who presented plenty of opportunities to improve our SEO via Gutenberg blocks and capabilities. I learned a lot from this talk and discovered concepts I didn't know well, like “search intent”, “position 0”, and silos. Overall a very good talk that I recommend watching in replay once available on WordCamp.tv.

Constructeurs de page et développement sur mesure : les bienfaits de l’approche hybride. (“Page builders and custom development: the benefits of the hybrid approach”) By @Maximebj

Maxime Bernard-Jaquet

In his talk, Maxime presented many clever ways to benefit from existing technologies, libraries, and builders, while leveraging hooks to augment the underlying functionality with custom code. Indeed, I often see agencies investing a lot in code, rebuilding features that already exist, sometimes reinventing the wheel while burning cash, instead of deciding where to code and where to reuse. Maxime's talk was full of advice on that matter.

La Sécurité Web et WP en 2023. (“Web and WP security in 2023”) By @JulioPotier

Julio's reputation in web security is well established. It's always interesting to listen to a specialist, and I wasn't alone in thinking that: the room was packed. It was a parlotte format in which Julio let attendees ask plenty of questions, which he answered thoroughly.

Exploiter les pleins potentiels du Design dans son projet WordPress. (“Harnessing the full potential of design in your WordPress project”) By @bter_design

Joffrey Jochum

As I enjoyed his design talk in Lyon, I decided to go listen to Joffrey again. He explained the importance of establishing a common language across whole organizations. I appreciated his views on processes and industrialization, as those are subjects I'm particularly interested in. I was glad to see we were aligned on those matters.

WordPress l’outil nocode au service de votre productivité. (“WordPress, the no-code tool at the service of your productivity”) By @Hugopsn_ecom

Hugo Pisan

I'm a developer at heart; it has always been a true passion. But I respect no-code tools, I understand their uses, and I try to avoid a coding bias when a no-code solution is appropriate. Sometimes coding is too expensive; sometimes it's not the right tool for the job. That's why I went to see Hugo's talk on no-code. He presented a few solutions and practical cases.

Our talk: Gutenberg, à la recherche de l’utilisation optimale d’un bloc face aux exigences de vos métiers. (“Gutenberg: in search of the optimal use of a block to meet your business requirements”) By @floriantruchot & @jdmweb

Florian and me. Picture by Rachel Peter.

We had a great time presenting this parlotte with Florian. Our preparation paid off: it went smoothly and on schedule. We had a good audience that, despite being a bit shy at first, opened up and asked interesting questions.

Here are the available slides (in French).

The after party

The wonderful organization team

After such a great day, we were invited to the Rotonde Stalingrad again, where the food and drinks were perfect. We had a great time together; you could feel the pressure of the day fading away, especially for the speakers, organizers, sponsors, and volunteers, whom I'd like to thank for all the time and effort they put into the organization.

Thanks to those people, WordCamp Paris was a success!

Improving project ReadMe

Introduction

Our developers install our projects on their local machines in order to work on them. This happens often, which is why it's important to make this step as straightforward as possible.

Without instructions, however, a developer will waste time trying to figure things out, and end up asking around for help.

The best place to put installation and onboarding instructions is a readme file at the root of your project. It's a convention well known to developers, which is why we provide a readme file for every project we build.

Before

Originally, this was the readme blueprint we provided by default:

## Requirements

* PHP >= 8.0
* Composer - [Install](https://getcomposer.org/doc/00-intro.md#installation-linux-unix-osx)
* node >= 16

## Installation

* Clone the git repo over ssh - `git clone git@github.com:agencewonderful/yourproject.git`
* Import a given mysql dump of the database
* Import the `web/app/languages` folder. (Folder or location should be given to you)
* Import the `web/app/uploads` folder. (Folder or location should be given to you)
* Run `composer install`
* Copy `.env.example` to `.env` and update environment variables:
    * `DB_NAME` - Database name
    * `DB_USER` - Database user
    * `DB_PASSWORD` - Database password
    * `DB_HOST` - Database host
    * `WP_ENV` - Set to environment (`development`, `staging`, `production`)
    * `WP_HOME` - Full URL to WordPress home (http://example.com)
    * `WP_SITEURL` - Full URL to WordPress including subdirectory (http://example.com/wp)
    * `AUTH_KEY`, `SECURE_AUTH_KEY`, `LOGGED_IN_KEY`, `NONCE_KEY`, `AUTH_SALT`, `SECURE_AUTH_SALT`, `LOGGED_IN_SALT`, `NONCE_SALT` - Generate with [wp-cli-dotenv-command](https://github.com/aaemnnosttv/wp-cli-dotenv-command) or from the [Roots WordPress Salt Generator](https://roots.io/salts.html)
* Set your site vhost document root to `/path/to/site/web/`
* Run `npm install`
* Run `npm run sprites` once upon first install, then every time you add an icon to the svg folder
* Run `npm run build` to build once.
* Run `npm run watch` to launch the watcher
* Access WP admin at `http://example.com/wp/wp-admin`

It contained two sections. One for the environment requirements, and one for the installation steps.

It was a good starting point, but our developers kept asking us recurring questions, so we realized it was still missing key points.

What makes a good Readme file?

This is the question I asked myself. I gathered all the questions I got from developers and simulated the different steps they had to go through before actually coding.

I shared a summary of the notes I came up with on Twitter.

I then deepened each part, which gave me enough vision and quality content to do a few things:

  • First, it allowed me to write a whole chapter in my dev team productivity course about what makes an efficient project installation. If you're interested in the subject, I encourage you to have a look.
  • Thanks to that, I came up with a new readme blueprint, which is presented and explained in the course chapter too.
  • Finally, I applied the new Readme layout to Wonderful’s default Readme file, which I’ll share with you below.

After

Based on this reflection, here's how we reworked our default Wonderful readme blueprint:

# Project Installation

## Environment 

- This site is set to run on PHP 8.0 and node 16.
- It requires composer. [Install](https://getcomposer.org/doc/00-intro.md#installation-linux-unix-osx)
- The technical environment prerequisites can be found on [this page](https://www.wonderwp.com.wdf-02.ovea.com/doc/DevOps/Server_Config.html#page_WDF-02)

## Access 

- This project's files are hosted on a GitHub repository, accessible here: **`[your_repo_url_here]`**
- You'll need read or write permissions to access the repository files. Ask Jeremy Desvaux or Marc Lafay for access.
- Clone the git repo over ssh - **`git clone git@github.com:agencewonderful/yourproject.git`**
- A database dump is required; it can be downloaded from the staging or production environment, or will be given to you.
- Import the `web/app/languages` folder. (Folder or location should be given to you)
- Import the `web/app/uploads` folder. (Folder or location should be given to you)

## Configuration 

Once the project files are in place, you'll need to go through the following steps:

* Choose a local URL for your website (e.g. **`http://local.example.com`**)
* Create a virtual host in your development environment for this URL, then point its document root to the `web` folder
* Run `composer install`
* Copy `.env.example` to `.env` and update environment variables:
    * `DB_NAME` - Database name
    * `DB_USER` - Your development environment database user 
    * `DB_PASSWORD` - Your development environment database password
    * `DB_HOST` - Your development environment database host
    * `WP_ENV` - Set to the environment: `development`, `staging`, or `production`.
    * `WP_HOME` - Full local URL to the WordPress home (**`http://local.example.com`**)
    * `WP_SITEURL` - Should be: `"${WP_HOME}/wp"`
    * `AUTH_KEY`, `SECURE_AUTH_KEY`, `LOGGED_IN_KEY`, `NONCE_KEY`, `AUTH_SALT`, `SECURE_AUTH_SALT`, `LOGGED_IN_SALT`, `NONCE_SALT` - Generate with [wp-cli-dotenv-command](https://github.com/aaemnnosttv/wp-cli-dotenv-command) or from the [Roots WordPress Salt Generator](https://roots.io/salts.html)
* Run `npm install`
* Run `npm run sprites` once upon first install, then every time you add an icon to the svg folder
* Run `npm run build` to build once, or run `npm run watch` to launch the watcher
* Additional commands may be available in the main `package.json` file.

## Run 

Given that your development environment is running:

- You can view the website by accessing its local URL: **`http://local.example.com/`**.
- You can access the admin at **`http://local.example.com/wp/wp-admin`**

## Contribution 

### GitFlow
We'll be running this project with the following GitFlow configuration:

- Production branch: `main`
- Staging branch: `develop`
- Feature branch prefix: `feature/`
- Release branch prefix: `release/`
- Hotfix branch prefix: `hotfix/`

### Branching process

- The `main` branch represents what's currently in **production**.
- The `develop` branch represents what's currently in **staging**.
- To propose a feature, open a GitFlow feature branch originating from `develop`. Ideally, create one feature branch per feature.
- No direct merges to `develop` or `main` are allowed: pull requests are mandatory to merge a feature back into `develop`.
- Before opening a pull request: merge the `develop` branch into the feature branch and resolve any merge conflicts.
- Once the code review has approved the merge: use the squash-and-merge strategy to merge the PR into the `develop` branch, then delete the feature branch.

### Commit conventions

Commits must follow the [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/) convention.

#### TL;DR

The commit message should be structured as follows:

```
<type>(app part/scope): <description>

[optional body]

[optional footer(s)]
```

The commit contains the following structural elements, to communicate intent to the consumers of your library:

- **fix**: a commit of the type `fix` patches a bug in your codebase (this correlates with PATCH in Semantic Versioning).
- **feat**: a commit of the type `feat` introduces a new feature to the codebase (this correlates with MINOR in Semantic Versioning).
- **BREAKING CHANGE**: a commit that has a footer `BREAKING CHANGE:`, or appends a `!` after the type/scope, introduces a breaking API change (correlating with MAJOR in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type.
- _types_ other than `fix` and `feat` are allowed; for example, @commitlint/config-conventional (based on the Angular convention) recommends `build`, `chore`, `ci`, `docs`, `style`, `refactor`, `perf`, `test`, and others.
- _footers_ other than `BREAKING CHANGE:` may be provided and follow a convention similar to the git trailer format.
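
For instance, a commit adding a hypothetical newsletter block could look like this (the scope and issue number are illustrative):

```
feat(blocks): add newsletter signup block

Render a signup form and post it to the mailing-list endpoint.

Refs: #123
```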


## Deployment

- Deployment is automated via a Jenkins CI pipeline

As you can see, it's now much bigger, but not just for the sake of it: it's more thorough and better organized. With the content broken down into more sections, it provides more, and clearer, information.

The different parts explain each step chronologically and try to reassure developers, giving them the best experience possible by answering, in advance, as many questions as might arise.
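
To make the Configuration section more concrete, here's what a filled-in `.env` could look like on a local machine (hypothetical values, following the Bedrock-style conventions the blueprint assumes; salts omitted):

```
DB_NAME=yourproject
DB_USER=root
DB_PASSWORD=secret
DB_HOST=localhost
WP_ENV=development
WP_HOME=http://local.example.com
WP_SITEURL="${WP_HOME}/wp"
# AUTH_KEY, SECURE_AUTH_KEY, etc. are generated with wp-cli-dotenv-command
# or the Roots WordPress Salt Generator
```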

It would now be nice to test the new blueprint with the team and evaluate it after a while. The idea is to keep improving our developers' experience: we'll ask them new questions in a while to see how things evolve, and we'll keep our ears open for any new questions. And you? What do you think of the new format? Let me know on Twitter with the hashtag #devteamproductivitycourse.

How to get the best out of your development team.

A software productivity course, by technical director Jeremy Desvaux de Marigny

What is this course about?

Productivity is a challenging topic. As such, improving a software or web development team's performance is arduous, especially when starting from scratch. You might not know where to start. You might have velocity issues or organizational problems. You might lose time over the same challenges again and again. You might lack vision over your production, or have too little time to tackle these subjects. Let's change that.

How this online course could help you:

Sounds good? Let's start getting the best out of your development team.

Content

Part 1 – Early industrialization challenges

We'll start with the fundamentals: quick wins, low-hanging fruit, and simple decisions. It should put you on the right track and help you build good habits from your first project to the next, with minimal to reasonable investment. In this chapter, we'll talk mostly about the work environment, a proper documentation strategy, and the collaboration process.

Part 2 – Medium production challenges

Once you've built solid foundations, we'll look at more advanced challenges to help you stay at ease while producing and managing several websites per month. We'll also cover areas just outside production itself, such as automation, maintenance, and security.

Part 3 – High production challenges

Continuing to strengthen the way your team operates, we'll discuss how to make sure it can handle high production demands, high traffic and scalability, and monitoring and observability. We'll pursue the best possible effectiveness at every level.

What can I expect to get out of this course?

Thanks to the techniques and tips provided in each chapter, and the diversity of the topics covered, you should notice a variety of improvements in the way your team operates, such as:

  • Better written communication
  • Better transversal knowledge sharing
  • Bus factor reduction
  • Training time reduction
  • Production time reduction
  • Focus on added value instead of losing time on fundamentals
  • Code quality improvements
  • Trust gains
  • Improved team efficiency

Who is this course for?

Since this course is about driving a software development team towards its best expression, it is aimed at people who have some form of leadership over such teams.

This can include, for example:

  • Lead developers
  • Executives who would like to get a better grasp of the strategy behind the topic
  • Managers
  • Senior developers who would like to expand their area of expertise

About the author

Hello! Thank you for your interest in my work. My name is Jeremy. I like to analyze how we work as a team, and then eliminate friction points or unnecessary cognitive load on people.

As a consequence, I developed a big interest in industrialization, automation techniques, and processes. I love it when my work allows the team to focus on the added value of a given problem, instead of losing time on parts that should be taken for granted in the long run. It's such an exciting subject!

I’ve created this course to put down in words a condensed version of the last decade of my work. I aim to give you a clear vision of what you can achieve and how, then to enable you to get results with your own team, within your own context, at your own pace.

Jeremy's portrait
  • Jeremy Desvaux de Marigny
  • Technical Director
  • “I’ve had the chance to work for some brilliant agencies in London, Sydney, Lyon, and Montpellier. I’ve been leading technical teams for 10 years, and I’m currently managing the web production at Wonderful.”

Access the online course

The course is due to be released at the end of 2022. If you can't wait, you can join the early-access mailing list right away. (It's free and doesn't commit you to buying the course.)

As a thank you, I'll send you a printable roadmap to give you an immediate view of the path we will follow and all the topics we will cover. This should already point you in the right direction. I will also send you updates on the chapters I finish writing, to keep you posted and deliver useful tips along the way.

I truly hope you'll like the course. Feel free to get in touch via email, LinkedIn, or Twitter to discuss it, or if you have any questions. You can also use the hashtag #devteamproductivitycourse to get the conversation going.

Join the mailing list

Subscribe now and receive a printable roadmap, plus course updates.

    Launching an online course

    Motivation

    Teaching and learning

    I'm passionate about the web. It started out as a hobby, and more than 10 years ago I had the chance to turn it into a full-time job. Building for the web is exciting. If you have a computer at hand, within minutes you can be building your first webpage. You change a few lines in a file, refresh your browser, and immediately see the result. That's thrilling, and I still appreciate this feeling even after all these years as a developer.

    Curiosity and excitement are a start, but sometimes you get stuck and need help. The second huge benefit of working in the web industry is the community. There are so many great and inspiring people out there, sharing their knowledge, replying in forums, streaming, blogging, creating open-source tools. It's vibrant! I've always loved that, and I've benefited so much from it that I'd like to thank every person who has helped me out, directly or indirectly, over the past 15 years. Those people have always been my role models, and I've always wanted to be like them.

    That explains why I'm reasonably active online, outside of my day-to-day job. I write articles on my blog to express myself on subjects I'd like to discuss. I often have an eye on Twitter, where I both read and engage with the community. I go to local meetups, where I sometimes speak. I've given talks at schools in the past, and I'd like to do new ones soon if I get the opportunity. I've joined technical Slacks where I like to exchange with fellow developers and tech leaders.

    I do so because it's rich and lively. I've gained so much by exchanging with peers: sometimes by listening, sometimes by debating, sometimes by showing, sometimes by teaching. All these actions have had positive effects on me and made me intellectually richer.

    Learning, building, and exchanging with the community are key pillars of my motivation and interest in the web industry. That's why I wanted to work on a new project that would tick all those boxes: one about teaching online. That's where the idea of creating an online course first came to me. But I wasn't sure at that point.

    Stepping out of the comfort zone

    Having to handle a situation you don't master is certainly uncomfortable at first. It forces you to learn and adapt. But past this stage, it can also be very rewarding in many ways: knowledge gains, self-esteem, recognition, etc.

    As a team lead, I've always had a great interest in optimizing the way a production team operates. It's very interesting, especially when you get the chance to start almost from scratch. My latest experience on the subject, at Wonderful, has been very successful, and I have a thankful thought for every person who trusted me to drive this effort over the last few years. Now that this work has been done and the team produces like clockwork, my expertise in improving team productivity is less needed. I've recently felt right in the middle of my comfort zone, but not really in a positive way.

    I needed a shake-up, a boost, a new challenge that would force me to learn, unlearn, and relearn if need be. This gave more weight to the idea of creating an educational project. I would have to research and train in areas I don't know, like marketing. I would have to write, speak, or maybe produce videos, grow an audience, and put my work in front of people I don't know. Maybe they'll like it, maybe they won't. I'll certainly have to challenge what I think I know, or what I think is best. That's a real shake-up. It feels a lot like stepping right out of the comfort zone, and I like the idea.

    The content creator boom

    We have a lot of trending topics in the web community. Sometimes it's WordPress, sometimes a JS framework, sometimes the open-source world, etc. Twitter goes crazy over it for a few months, then the next topic comes along, bringing its whole lot of arguments for and against. To me, this shows the community is dynamic, endlessly creating value and possibilities.

    Recently, two subjects seem to be trending: crypto and the creator economy. As you can guess, the second is more within the scope of this article.

    The creator economy seems to be booming at the moment, and not just within the web industry. We see content creators everywhere online: on social media, YouTube, Twitch, Instagram, TikTok, and dedicated platforms. Anyone who wants to get started creating content online has access to plenty of mature tools and platforms to do so, and that's a real chance.

    I didn't really notice the movement at first; then I realized many of the people I follow were becoming content creators, either full-time or part-time. Some were creating video courses on egghead.io, some were streaming on Twitch or YouTube, some were creating online courses on custom-made websites, and some were literally building platforms for other content creators.

    Here are a few of them off the top of my head, if you're interested:

    I follow these people because I admire their work, and now they're also an inspiration to believe in myself and launch this project.

    Diversifying my revenue model

    From the beginning of my career, I've always been a full-time salaried employee. That's a pretty straightforward revenue model: you exchange a certain number of hours of work for a given salary. This model suited me well; I didn't really think much about it until recently.

    I'm now 34, married to the love of my life, the proud dad of two kids, and the owner of a lovely house outside Montpellier in the south of France. Life is sweet overall, and I don't really need much at the moment, but some questions arise. Will that always be the case? Probably not. The kids will grow up and need to go to university at some point. Living expenses go up with inflation, and salaries don't always follow immediately. What if something goes wrong with my current employer?

    It's not that I'm worried; it's more about resilience, trying to imagine a more robust revenue model that would not rely entirely on the relationship I have with my employer. It's also about investing in yourself which, if I'm honest, is the only thing I'm able to invest in given my current revenue model. So investing in my knowledge to create an online resource people would be interested in buying would be a great achievement in itself, but also a step towards a more diverse income scheme.

    I also doubted I could pull this off within my existing work/life balance. I work 39 hours per week and have a demanding personal schedule outside of that with the family. It seemed very, very hard at first to squeeze enough quality time into the week to work seriously on this new project. That's why I reached out to my boss and presented the project to him: the reasons behind it, and the benefits it would have for me and, hopefully, by extension for Wonderful. He took it with great wisdom and vision, and freed up 3 hours of my work time per week for the project. I would like to thank him personally for that.

    Choosing a subject

    If we take a step back, so far we've discussed teaching, learning, becoming a creator, and gaining some income from it. That sounds like an ambitious plan, but at least it's a plan. A plan I took the time to mature and consolidate. One I would feel motivated to put in the necessary effort to realize. One I'm at peace with: I'm going to create an online course.

    But soon enough, crucial questions come into play. What should I talk about? What would the subject be? What would the format be? I can talk about many subjects, but surely I can't teach that many well enough, and some I wouldn't feel legitimate teaching at all.

    It took me some time to realize that, in the end, I only felt legitimate enough about one subject, the work I've been doing for the last 10 years: improving development teams' organization and productivity.

    Figuring out the table of contents

    At this step again, I was at peace with the project direction. I was also convinced by the reason I wanted to do it, and by my subject. But as I said in the previous paragraph, this is what I've been doing for the past ten years: that's a lot to discuss, so I had to figure out how to cut and organize my thoughts into a clear and educational roadmap.

    I started with the brain-dump technique, writing down in a file everything I could think of. It ended up being a big list, but at least it was a start and a concrete base I could work with. I reorganized items, grouped some, deleted some. I saw that some were potential subjects, some were more like chapter titles, and some were content that belonged inside a chapter rather than being one of their own. I kept reworking the file until I had something manageable and teachable over a reasonable period.

    I ended up with three parts of roughly 10 chapters each. That sounded coherent to me.

    Choosing the format

    At this point, the project was becoming clearer to me. I knew what I could talk about, but I still didn't know how I would do it. Would it be a video course? A hosted online course? An email sequence course? A custom website course? There are many choices.

    • The video course: I eliminated the video format for many reasons: I don't have any equipment for it, I don't have a proper recording space at home, and most importantly, I don't have the recording or editing skills to make professional-looking videos. Nothing insurmountable, but it would have meant adding more investment and more risk to the project, which I decided not to do.
    • The email course: The email course format didn't seem very appropriate either, mainly considering the table of contents I had imagined. Thirty chapters sent to your inbox would be a lot of emails to receive, but also to read, digest, and keep for future reference.
    • Online course platforms: I then turned to dedicated online course platforms. I compared a few of them, but none convinced me in particular. Sometimes the course presentation wasn't great, sometimes it was expensive up front, sometimes it forced a file download instead of a web format, and sometimes your course would drown in a sea of other courses. I couldn't find the perfect candidate, mainly because I had a set of requirements in mind that no particular platform matched, not because the platforms weren't good.
    • A custom website: I'm a developer, so I could build the website myself if needed, but doing so would require a lot of time if I had to build everything entirely, especially the membership part, with payment handling and all.

    So what I've decided to do is host the course on my personal website to start with, which gives me the freedom to create and write the course as I wish, while delegating the membership part to a creator platform.

    So as you can see, I've settled on a solution that mixes the last two points: a dedicated website plus a creator platform.

    I've chosen Gumroad for several reasons:

    • It has a free mode, which allowed me to deeply test what the app can do:
      • I tested the product creation
      • I then embedded it in my site
      • I then faked a purchase
      • I checked the confirmation email I got
      • I liked the process, and all went well
    • It can generate unique license numbers per purchase. That's great for authenticating access to the course on my site
    • It has an API to check license numbers. Another great piece for controlling access to the course.

    I then coded the communication between my site and Gumroad: the API side to protect access to the course, and their embed code to present my products.
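
    To give an idea of what the API side can look like, here's a minimal sketch of a license check from WordPress. It assumes Gumroad's v2 license-verification endpoint and a hypothetical product permalink; the real integration needs more error handling:

    // Minimal sketch: verify a Gumroad license key from WordPress.
    // 'devteamproductivity' is a hypothetical product permalink.
    function course_license_is_valid( $license_key ) {
        $response = wp_remote_post( 'https://api.gumroad.com/v2/licenses/verify', array(
            'body' => array(
                'product_permalink' => 'devteamproductivity',
                'license_key'       => $license_key,
            ),
        ) );

        if ( is_wp_error( $response ) ) {
            return false;
        }

        // Gumroad answers with a JSON body containing a "success" flag.
        $data = json_decode( wp_remote_retrieve_body( $response ), true );

        return ! empty( $data['success'] );
    }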

    After a few days of work, the technical aspects were done, which meant I was ready to host the course.

    Conclusion

    Based on this preliminary work, I've decided to build an online course. It certainly won't be easy, which is why I'm also preparing a series of articles explaining my journey as a course creator.

    I had already started writing the first chapters of the course content. But I've decided to put the writing on hold, because of something I read while doing further research: attention to marketing is crucial to the success of such projects. So we'll see in the next post of this series how I approached the course marketing.

    How we made a PHP production team adopt Gutenberg

    Reaching the limits of wysiwyg

    I started working with WordPress in 2009, and it has followed me quite closely over the last 11 years. For the first 8 or 9 of them, me, my colleagues, and most of the community working with this CMS did so in PHP. Together, we were part of this huge ecosystem of WordPress professionals who built themes and plugins for it, almost all of it written in PHP.

    Then Gutenberg came along, and for the right reasons in my opinion. The old editor had reached its limits. As an agency building premium websites for our customers, we felt limited by it, and so did our clients. It had reached a point where they couldn't get high-quality layouts without writing HTML themselves or resorting to uncomfortable trickery. It had had its time, and it was time for a change. And oh boy, what a change it was about to be.

    First Gutenberg encounter

    I remember the excitement I felt when I saw the first Gutenberg demo. It was even mounted in a front-office environment, which made us think we might be able to offer live content editing on a page too in the future. Sadly, that was just for the demo; Gutenberg is still only a back-office editor for now.

    Then came the Gutenberg plugin, which let us work with the new editor even though it was not part of WordPress core yet. It was time for a first encounter, and honestly, I was shocked. I was shocked because I wasn't ready, because I was surprised, because it felt so different, because it was hard. I was close to calling it a betrayal. Why? Because I felt a rupture with what makes WordPress awesome to me.

    WordPress is, in my opinion, a very open platform, partly thanks to its hooks and its template hierarchy mechanism. Thanks to those two things, you can extend the CMS or rewrite parts of it to a really great extent. For example, you can use many different rendering methods on the front end: the loop by default, MVC with frameworks, or Twig/Timber rendering if you want. It's so open, and so many people have produced resources to work with it, that you can choose an approach that truly fits the way you'd like to work.

    At first, I felt we'd lost that with the introduction of Gutenberg. Suddenly I felt I no longer had the choice, that someone had told me and the rest of us: “Learn JavaScript deeply, because that's what the cool kids do now, and we want to be cool again.”

    At first, I did think it was hype-driven development and an elitist move, forcing React upon a PHP-based community. A move that would exclude many people from the game, when WordPress usually tries to get the most people on board. I saw a risky and unstable move embracing a technology evolving at high speed, with frequent major or breaking changes in its core API, which contrasts with the huge backward-compatibility efforts on the PHP core, for example.

    Unsurprisingly, I wasn't alone in feeling this. Many of the WordPress pros I know were shaken by this new way of approaching our work.

    But don't get me wrong, I'm not here to bash Gutenberg. Those last lines summarize a first impression, a negative one that needed more time and a better understanding of the product to mature. That's why I didn't share it much at the time.

    Embracing change

    Despite my initial skepticism regarding the development experience, Gutenberg was here to stay, and was going to be a centerpiece of the platform I worked with the most, so I had two options: jump on the train, or stay grumpy on the sidelines.

    Furthermore, the development experience was not the only thing to take into consideration: what about the webmaster experience? What about the writer experience?

    I tried both, and I had to admit Gutenberg was a huge improvement compared to the previous situation. Writing was smoother, and also quicker. Same when I put the webmaster hat on: I was able to quickly assemble pages, sometimes with complex layouts. The presentation was clear, it distinguished content from structure, and I could rapidly see how the page was assembled and find the area I wanted to edit. It felt cleaner, with no more HTML visible on the page, just blocks. We were expecting an editing improvement, and that was clearly the case: it's a great tool to create content and lay it out, and those are two very good reasons to make a development effort to offer our customers the best editing experience we could.

    How can we close such a gap?

    Let's summarize the situation. On one side, we've got a PHP production team, with PHP processes, PHP plugins, and PHP components accumulated over the years. On the other, we've got a brand-new editor, full of possibilities, promising to break the editing barrier, but requiring you to code, and even render, in React. That made the team wary.

    That's quite a gap, but the questions are: are those two worlds really incompatible? Do we have to throw away the efforts we've capitalized on in favor of a shiny new tool? Is there a pragmatic migration path?

    First point: server-side opening

    Remember when I wrote earlier that Gutenberg was removing the luxury of choice from developers? It turns out I was wrong (on many counts, but let's focus on this one for now). The first choice you have: you can opt for server-side rendering in place of React rendering when you register a block on the PHP side with the register_block_type function. Here's a more complete example available as a gist.

    This opens up a new world of possibilities, because in a server-side render callback you have access to two crucial parameters: the attributes entered on your block, and the edited block content. You can see those two parameters documented on the WP_Block_Type::render method documentation. And honestly, with those two parameters, you have everything you need to render your blocks with a PHP callback, and more precisely in our case, existing PHP components like the ones we used to administer via shortcodes.

    To sum up, with a server-side render callback, you can improve the admin experience by coding a React block for the back office, and map the edited attributes and content to an existing PHP component on the front office, which smooths the migration path.
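
    Here's a minimal sketch of that registration; the block name and the PHP component it delegates to are hypothetical stand-ins for the real ones:

    // Register a block whose front-office output is produced in PHP.
    // 'wonderful/key-figure' and KeyFigureComponent are hypothetical names.
    add_action( 'init', function () {
        register_block_type( 'wonderful/key-figure', array(
            'attributes'      => array(
                'title' => array( 'type' => 'string', 'default' => '' ),
            ),
            // Receives the administered attributes and the block content.
            'render_callback' => function ( $attributes, $content ) {
                // Delegate the rendering to an existing PHP component.
                $component = new KeyFigureComponent( $attributes );
                return $component->render( $content );
            },
        ) );
    } );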

    Second point: migrating away from shortcodes

    Shortcodes have been around for a long time. They've been extremely useful for easing the administration of content inside the wysiwyg editor. They allowed us to separate the content from the HTML structure, for example.

    But they have their limits too, especially when dealing with media, or with complex structures (even with nested shortcodes).

    They were quick to create, though (thus cheap), and efficient at their task. Most importantly for our subject, they were mastered by everyone; whereas suddenly, we need to produce React blocks, which at the time of writing excludes some of our developers from the process, and takes longer for the others (thus more expensive).
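
    For contrast, here's the kind of minimal shortcode we're talking about migrating away from (the names are illustrative): cheap to write, but limited once media or nested structures come into play.

    // A typical pre-Gutenberg helper: [key_figure title="Visitors"]42[/key_figure]
    add_shortcode( 'key_figure', function ( $atts, $content = '' ) {
        // Merge the user-supplied attributes with defaults.
        $atts = shortcode_atts( array( 'title' => '' ), $atts );

        return sprintf(
            '<div class="key-figure"><strong>%s</strong><span>%s</span></div>',
            esc_html( $atts['title'] ),
            esc_html( $content )
        );
    } );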

    That's why we decided to invest in a solution that would rebalance the scales: a PHP-annotation-based block abstraction mechanism. I'll write a dedicated article on this technique soon, because it's a bit long to explain in detail, but here's the short version: placing PHP annotations on a PHP component class (such as the ones we've historically used to render content), as well as on its attributes, automatically makes this PHP-based component available as a Gutenberg block in the editor, in the form of an admin form with one field per attribute. Rendering is then delegated back to this component, with the value of each field passed along.

    Basically, it allows us to turn the following:

    /**
     * @Block(title="Chiffre clé")
     *
     */
    class ChiffreCleComponent extends AbstractComponent
    {
        /**
         * @var string
         * @BlockAttributes(component="PlainText",type="string",componentAttributes={"placeholder":"Titre"})
         */
        protected $title;
    
        /**
         * @var string
         * @BlockAttributes(component="PlainText",type="string",componentAttributes={"placeholder":"Texte"})
         */
        protected $text;
    
        /**
         * @var string
         * @BlockAttributes(component="MediaUpload",type="string",componentAttributes={"size":"medium"})
         */
        protected $image;
    
    [...]

    Into this:

    TL;DR: this worked super well for low-level administration needs.

    Third point: becoming comfortable with React Gutenberg blocks in the back office

    Indeed, that's the whole point of having this new editor: being able to create new blocks for it.

    I had done a bit of React in the past: a side project to understand the framework, how it worked, and what coding with it felt like. But that was a long time ago and not really in depth, so I had to train again: on React, and on Gutenberg. I read a lot of tutorials and plenty of documentation, then decided to take on a small challenge: creating a section block (the equivalent of the <section> HTML tag) and a container block (basically a wrapper div).

    I've always found it interesting to try to build something, even small, when learning a new skill: it lets you learn by doing, and search online for ways past each difficulty you encounter. In this case, focusing on those two blocks helped me understand how a block should be registered and structured, how changes should be persisted, how the render mechanism works, and so on. It was full of discovery and knowledge, and it surely helped me a lot in tackling the abstract block challenge, which was a lot more technical.

    For those two blocks, I opted for a React render, not a server-side one; it was more appropriate in this case, and it made me discover how you can render inner blocks within a block.

    It was not easy at first to build those blocks. Since then, I've become more at ease with the process and the React syntax, even though it moves quickly. But it's still a lengthier process for us than coding in PHP, so when facing a new block opportunity, we've decided to question what the best way of dealing with it would be at that given time, instead of sticking to fixed thinking. Let's see how in the next chapter.

    Conclusion: we have an even greater choice now

    To summarize how my thinking evolved: I was at first afraid that Gutenberg would force us down a closed path, one disconnected from our PHP reality. After investigation, it turned out it gave us an even wider set of tools to address our clients' needs, adding new ways of working without necessarily eliminating the existing ones.

    Now, when we have to develop an administration feature for our clients or webmasters, we ask ourselves the following questions:

    • Is the editing feature we're looking at complex (for example a slider, an accordion, a set of rich panels…)? Does it therefore deserve the best Gutenberg experience we can offer?
      • If yes, we should take advantage of the editor's possibilities and code a native Gutenberg block (in React).
      • We still have the choice to render in PHP or React based on the desired output. Some of our components are even rendered in Twig.
      • It's more difficult for us to code, but we're improving block after block. We've even abstracted part of the mechanism already to shorten the block-creation process. Our tooling is in place too.
      • For the webmaster: it adds a lot of value to their experience.
      • For now, we rely on more senior profiles to produce those blocks. Those profiles took the right training and are now able to create production-ready Gutenberg blocks.
    • Is the editing feature we're looking at basic, or a shortcode improvement?
      • If yes, a form to fill in should be sufficient in the admin, so we provide a PHP component, made available as a form to fill in within Gutenberg thanks to annotations.
      • Thanks to the fact that the process is well documented, these blocks are very quick for the team to code and to make available in Gutenberg.
      • It's not the full editor experience, but it's still very comfortable for a webmaster, and it's cheap to produce.
      • Any member of the team knows how to produce such blocks and is able to do so.
    • Do we really need to code a block for this feature at all? Maybe a set of default Gutenberg blocks could do the trick? Or maybe a pattern?
      • It’s worth knowing all the default blocks, what you can do with them, and what you can’t. For example, I frequently use the paragraph, columns, gallery, and media ones.

    To sum up, when a new block opportunity comes along, we try to make the best decision about where to invest time and effort: on the blocks that are worth it, rather than on the more basic ones that are not.

    Even though adopting Gutenberg was a bit daunting during our first encounter, the time and effort we put into analyzing the tool and the adoption path were clearly worth it. Our webmasters love it, and so do our clients. The team is on board; everyone understands the added value and is technically able to contribute at different scales.

    Finally, we haven't lost our historical ecosystem. Quite the contrary: it's now enhanced by a much better editing experience. Everyone wins.

    Getting started with Jenkins Blue Ocean Pipelines

    The first thing that struck me when dipping a toe into Continuous Integration is the barrier to entry, which I feel is quite high. I had a hard time finding information about how to properly get started with the subject and get a grasp of the tools and code needed. It properly kicked me out of my comfort zone, especially as I wasn't surrounded by people who had done it before and whom I could have learned from more easily. Special thanks to James Dumay for his time and help. I've also found this google group full of helpful people.

    Strategy

    The first lesson I learned is that it's better to establish a strategy. Jenkins is a powerful tool that can be used to do many different things, so what do you want it to do? I didn't know at the time, and that did me a disservice, but I ended up finding one: I consider Jenkins a teammate whom I can ask to put our code into production (or staging) when we push code to specific branches (master and develop).

    If you were the one asked to push code into production manually, which steps would be required?

    • Pull the latest changes from the develop branch
    • Install composer vendors
    • Install node vendors
    • Compile assets (Sass to CSS for example), combine, minify, timestamp
    • Test that everything is all right all around
    • If yes, send the code to staging
    • Migrate db changes to staging
    • Test that everything is all right all around
    • Notify project managers that the changes are online

    That's what I'm going to ask Jenkins to do for me. That automates the whole process of pushing code to staging, and apparently that is called a pipeline.

    The obvious benefit of automating this is saving the time it takes a developer to do it manually every time, but there are other advantages too.

    Security, for example: all the data required to run these scripts is hidden from the developers and kept secret by Jenkins. This is especially useful if you work with remote developers (like freelancers who are not part of your permanent team) and you're not at ease with the idea of giving them SSH access to your server.

    This approach also removes the pain associated with doing all those steps manually. As a result, you might get easier onboarding from your team and more frequent pushes to the staging environment, because in the end, a developer's job ends once their code is merged into the develop branch.

    Delegating the knowledge to the machine is also good knowledge capitalisation in the long run. Maybe there's a deployment expert in your team; what will you do if this person leaves the company? Or maybe you know all the steps to deploy a project right now, but if you happen to work for 6 months on something else that works differently, then come back to this project for a fix, will you still remember everything that needs to be done? What if you delegate that fix to a new teammate? Jenkins will always remember and run all the necessary tasks in the right order. One less burden on your shoulders.

    And you? What’s your CI strategy?

    Source Control Management (SCM)

    Continuous integration puts SCM at the heart of the process: it's by performing certain actions on the SCM that you trigger CI builds.

    Here again, you've got several strategies at your disposal; we've chosen GitFlow. It's a bit beyond the scope of this article, so I'm not going to expand too much on GitFlow here, but there are nonetheless some aspects worth mentioning.

    For us, in GitFlow, the master branch reflects the state of what is currently on the production server, and the develop branch the state of what is currently on the staging server. When we develop a new feature, we open a feature branch forked from develop. We work on this branch until we are ready to merge it back into develop, by opening a pull request for instance.

    We've configured our GitHub repositories to send a webhook to Jenkins when a change happens to them; this way, Jenkins gets notified of the change and launches a new CI build. You could also have Jenkins periodically poll your SCM, but that's less efficient.

    To add this kind of webhook, go to your GitHub repository page, then Settings / Webhooks.
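
    The payload URL then typically points to your Jenkins instance's GitHub webhook endpoint (assuming the GitHub plugin is installed), for example `https://jenkins.example.com/github-webhook/`.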

    As you can see, SCM is really at the heart of all this, as CI relies on branch states and branch actions to know what to build.

    Installing Blue Ocean

    I'll assume you already have a running Jenkins instance, so we can concentrate on adding Blue Ocean.

    Blue Ocean is actually a Jenkins plugin, so to install it you can go to Manage Jenkins > Manage Plugins > Available, then type “Blue Ocean” in the filter search input.

    Once you've located the Blue Ocean plugin, install it. And that's really all, because Jenkins will take care of downloading all the required dependencies for you.

    Once the plugin is installed and Jenkins restarted, you should see an “Open Blue Ocean” link in the left-hand sidebar.

    The Jenkinsfile

    You didn't need this kind of file to build CI pipelines with Jenkins before; this is new. It allows you to declare a pipeline directly in your project instead of configuring it inside Jenkins.

    By default, Jenkins looks for a Jenkinsfile at the root of your project files, but you can tell it to look elsewhere.

    Depending on the kind of job you declare later on in Jenkins, this file can also serve as a trigger to create jobs automatically, from within GitHub organizations for example.

    The pipeline

    As we said earlier, our aim is to automate the different tasks we repeatedly do when pushing code online.

    Here's the essence of the pipeline script I usually use:

    pipeline {
      agent any
      stages {
        stage('Build') {
          steps {
            script {
              defineVariables();

              echo "Starting Build #${env.BUILD_ID}, triggered by $BRANCH_NAME";

              if (env.runComposer == 'true') {
                try {
                  sh 'composer install --no-dev --prefer-dist';
                } catch (exc) {
                  handleException('Composer install failed', exc);
                }
              } else {
                echo 'skipped composer install';
              }

              if (env.runNpm == 'true') {
                try {
                  sh 'npm install';
                } catch (exc) {
                  handleException('npm install failed', exc);
                }
              } else {
                echo 'skipped npm install';
              }

              if (env.runBuild == 'true') {
                try {
                  sh 'npm run sprites';
                  if (BRANCH_NAME == 'master') {
                    sh 'npm run build:prod';
                  } else {
                    sh 'npm run build';
                  }
                } catch (exc) {
                  handleException('Building the front failed', exc);
                }
              } else {
                echo 'skipped npm sprites & build';
              }
            }
          }
        }
        stage('Deploy') {
          steps {
            script {
              try {
                echo "Deploying $BRANCH_NAME branch"
                def creds = loadCreds("livraison_occitanie_${BRANCH_NAME}_credentials");
                deployCode(creds);
                finalizeDistantMigration(creds);
              } catch (exc) {
                handleException("The $BRANCH_NAME branch deployment failed", exc);
              }
            }
          }
        }
        stage('Integration tests') {
          steps {
            script {
              try {
                if (env.runCypress == 'true') {
                  def host = '';
                  if (env.siteUrl) {
                    host = env.siteUrl;
                    echo "Starting integration tests on $host"
                    sh "cypress run --env host=$host"
                  } else {
                    echo 'No host defined to run cypress against';
                  }
                } else {
                  echo 'Skipped integration tests'
                }
              } catch (exc) {
                handleException("Cypress tests failed, which means you have a problem on your $BRANCH_NAME live environment", exc);
              }
            }
          }
        }
        stage('notify') {
          steps {
            script {
              notify(env.slackMsg, env.slackColor);
            }
          }
        }
      }
    }

    I've omitted the function declarations for the sake of clarity, but you can download the full script here. Let's explain what it does in a bit more detail, but remember: I'm in no way an expert, and this is truly experimental. It works for my needs, though, so let's see if it can be useful for you as well.

    Local build

    The first big step is to have Jenkins create a local build for itself. In a GitHub multibranch pipeline, to do so, it first pulls the latest changes from your source control. I realized that's an important thing to know, because it implies that Jenkins needs enough disk space to do so, and the more branches you build, the more space you need. Some other CI tools don't work like this; they don't pull the code first, they're just scripting tools. But in a GitHub multibranch pipeline, Jenkins pulls the code first. That's probably one reason why there are so many cloud-based CI services online, actually.

    Once the code has been pulled, I ask Jenkins to log a few messages with information about the build. You can find the list of variables accessible from inside a job under Pipeline Syntax > Global Variable Reference.

    Then I run a composer install and an npm install. I had trouble with these instructions when I started, because I realized the result depends on the capabilities of the agent you choose to run your pipeline in. It worked on my machine, but not online, because my online Jenkins didn't natively have access to those tools. This is because the agent I chose is 'any'. You could choose a Docker agent instead; if so, make sure that Docker agent has the capabilities you then use in your pipeline script.

    Tests

    The thing I realized with tests is that the build Jenkins creates while running a pipeline should be considered autonomous and disposable. In other words, you should be able to recreate the entire environment, code and database, run your tests against it, and if it works, carry on with the pipeline, eventually destroying the entire local build at the end; hence autonomous and disposable. If your unit tests require complete database bootstrapping, for example, you should make sure your pipeline is able to recreate the complete environment for your tests to run correctly, and I find that this is not an easy task.

    Once your environment is ready to run your tests, you can run many different types of tests against it, in sequence or in parallel. For example, you could run unit tests first, then a set of parallel browser tests, and if all goes well, let the pipeline carry on.
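
    Declarative pipelines support parallel stages for this; a minimal sketch (the commands are examples) could be:

        stage('Tests') {
            parallel {
                stage('Unit') {
                    steps { sh 'vendor/bin/phpunit' }
                }
                stage('Browser') {
                    steps { sh 'cypress run' }
                }
            }
        }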

    I’ve tried a unit test setup with phpunit, and Jenkins is able to understand your test report out of the box and abort the build if tests do not pass. If you want to produce code coverage stats, however, it won’t work unless your Jenkins machine has a tool that can generate them, such as Xdebug.
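
    Assuming the JUnit plugin is installed, publishing the phpunit report could look like this; the report path is an example:

        stage('Unit tests') {
            steps {
                // Produce a JUnit-formatted report that Jenkins can read
                sh 'vendor/bin/phpunit --log-junit reports/phpunit.xml'
            }
            post {
                always {
                    // Fails or marks the build unstable when tests do not pass
                    junit 'reports/phpunit.xml'
                }
            }
        }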

    Delivery

    In my strategy, I planned to deploy the code to the remote environment using rsync. I know how to use this command; that wasn’t really the issue here. It was more about how to handle credentials safely. I didn’t feel like writing the complete rsync command, with the user authentication directly in it, in a script that is versioned inside the project (remember the security considerations we evoked at the beginning). Furthermore, the user used by the command changes depending on whether I deploy the code to the staging or the production environment.

    That’s when I learned about credentials. With credentials, you can store sensitive information in different formats under an id you can then use to retrieve that information from within the pipeline. What I like to do is create a JSON file where I put all the information I need for the build, then store this file as a secret file credential. Inside the pipeline, I load up this file and get access to all the secret logins, passwords, db names, paths…
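
    As a sketch, loading such a secret file could look like this, assuming the Credentials Binding and Pipeline Utility Steps plugins are available; the credential id and JSON fields are hypothetical:

        script {
            // 'deploy-config' is a hypothetical secret file credential id
            withCredentials([file(credentialsId: 'deploy-config', variable: 'CONFIG_FILE')]) {
                // readJSON comes with the Pipeline Utility Steps plugin
                def config = readJSON(file: env.CONFIG_FILE)
                sh "rsync -az ./build/ ${config.user}@${config.host}:${config.path}/"
            }
        }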

    Also bear in mind that in this kind of setup, it’s the Jenkins machine that works for you and executes the commands you want in a non-interactive mode. That means you won’t have the opportunity to enter parameters by hand along the way. So you need to parameterize your commands and make sure no password prompt shows up during a command. This is usually achieved by authorizing the Jenkins machine on the remote host via an ssh key.
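
    One way to guarantee a command can never hang on a password prompt is ssh’s BatchMode option, which makes the command fail instead of prompting; the host and paths below are examples:

        // BatchMode=yes makes ssh fail instead of asking for a password,
        // so a missing key breaks the build instead of freezing it
        sh "rsync -az -e 'ssh -o BatchMode=yes' ./build/ deploy@example.com:/var/www/site/"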

    Notification

    If you don’t want to open up your Jenkins every time you push code to a branch, wondering whether the build is running or not, it’s nice to have a notification mechanism in place. You could send notifications when a build starts, fails, or succeeds, so everyone interested in the project can follow what’s going on.

    Enabling a Slack notification once every build is finished has had a very positive impact on my dev teams, but also beyond that, on project managers and POs for example. They found it quite useful to follow what was pushed to the staging environment along the way; it was good informal communication from the developers that let them know the work was progressing. (Side note: this requires the Slack Notification plugin.)
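
    With that plugin installed and configured, notifications can be sent from a post section at the pipeline level; a minimal sketch (messages and colors are examples) could be:

        post {
            success {
                slackSend(color: 'good', message: "Build #${env.BUILD_NUMBER} of ${env.BRANCH_NAME} is online")
            }
            failure {
                slackSend(color: 'danger', message: "Build #${env.BUILD_NUMBER} of ${env.BRANCH_NAME} failed")
            }
        }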

    Closing thoughts on the pipeline script

    I know this script is far from perfect; a number of things bother me. For example, there’s a bit of duplicated code, and I’d like the script to send a Slack notification when the build fails. I’d also like to launch certain actions only if a particular file changed, like running composer install only if composer.lock changed, and I’d like near-zero-downtime deploys with current folders symlinked, and so on.
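
    For the “only if a particular file changed” idea, declarative pipelines offer a when directive with a changeset condition; a minimal sketch could look like this:

        stage('Composer') {
            // Runs the stage only if composer.lock changed in this build
            when { changeset 'composer.lock' }
            steps {
                sh 'composer install'
            }
        }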

    But I’m also pleased that it does perform the tasks I wanted to automate. It creates its build on its own, deploys the code remotely by looking up credentials securely, performs some db changes, and notifies Slack when a build is online. So far that’s been a great achievement for me, given that I’m not a sysadmin and not a CI expert at all. I figured that with CI, it’s better to start small and iterate than to imagine the ultimate pipeline right away. I wasn’t really confident, but I tried a few things out, figured out pitfalls, made some baby steps, and eventually got something useful working.

    I’d love it if a proficient Jenkins pipeline user gave me some advice on the points to improve. In the meantime, you can download the entire script here.

    If you’re looking for a good resource to learn the pipeline syntax, I recommend this one: Declarative Pipeline With Jenkins.

    Declaring a new multibranch pipeline in Jenkins

    Having a Jenkinsfile ready to be read by Jenkins in your project is good, but not enough: you also need to create a new Jenkins project and associate it with your dev project.

    To do so, go to New Item, and set the project type to Multibranch Pipeline.

    Remember that SCM is at the core of this multibranch pipeline concept, so it’s no surprise to find SCM source settings next. In the Branch Sources section, connect your SCM source. I tend to use GitHub, so that’s what I chose in the dropdown, which adds a GitHub settings panel to fill in.

    One of the things it asks for is a credential. I therefore created a credential with my GitHub account information, so Jenkins can use it to connect to my account on my behalf and retrieve the list of all the repositories I have access to, be they public or private. The Owner input is the GitHub owner: most probably yourself or your organisation.

    Once your credential and owner are filled in, you should have access to all of your repositories in the Repository dropdown, so pick the project with the Jenkinsfile inside.

    By default, Jenkins will clone every branch you push to your remote repository onto its local machine and run the Jenkinsfile on it. This can require a very capable machine if you have many branches to build and/or many projects managed by Jenkins; I wasn’t really aware of that before I started with CI. Cloning many different projects locally can use up a huge amount of disk space. So I’ve instructed Jenkins to only build certain branch patterns automatically (the “Filter by name (with regular expression)” setting), and to only keep a limited number of builds per project, within a limited period of time (“Days to keep old items” and “Max # of old items to keep”).

    So as you can see, even with a limited number of settings, you can get your new job running quite rapidly.

    Now that you have your Jenkinsfile in your project, a Jenkins job configured properly, and the GitHub webhook we talked about earlier set up properly, once you start pushing code to your remote repository, Jenkins should pick up the push event, grab your code, and run your pipeline script against it.

    Conclusion

    I find the new Blue Ocean UI very nice compared to the ageing previous one. It’s fresh and bright, feels flat and modern, but it’s not just a pretty face: the new UX is nice as well.

    For example, you can browse the list of all your pipelines and mark the ones you currently care most about as favourites, which conveniently pins them at the top of your pipeline list.

    When viewing the job detail page, the new UI looks so much better than the legacy view! You can see all the pipeline steps, and if anything goes wrong, the view points you to the issue right away, saving you from browsing three-kilometre-long logs.

    The new pipeline syntax is more declarative, and therefore might require more work from developers to master, and more time to implement what they want. But once you have a solid pipeline running, the time and energy saved in the long run are definitely worth it.

    I’ll always be thankful to my loyal Jenkins for helping me push my team’s code online every time we send a few commits, and I’m delighted to use this stable and nice-looking tool daily.

    I hope this feedback on my personal experience will be useful to any fellow developer out there. Good luck setting up your own pipelines! Let me know what you’ve built in the comments.

    jdmweb