Hello, we are OceanoBe, and this is our playbook. In the following pages we detail how we work and deliver digital products of all scales and how we can help you and your team be successful in achieving your goals.
Thanks to our expertise across various industries and with clients of different scales, we have developed a set of rules that we follow in order to deliver high-quality products.
Please keep in mind that this is a living document, constantly reviewed and improved by our company’s employees. As in our day-to-day jobs, we always strive to become better at what we do and how we do it. Documenting the processes and principles that make us better turns this into a repeatable process and ensures we deliver added value to our clients with every engagement.
Building quality digital solutions has one important aspect at its core: understanding, and deciding, what to build. Our team likes to take its time and get to the heart of our clients’ business, its problems and the challenges faced by their employees, customers and users.
By doing so we can more easily determine, through the eyes of our clients’ users and customers, whether the problems we are trying to solve are worth the effort and the resources.
Setting this stage enables us to deliver quality, innovative solutions to our clients’ challenges.
The sprint is a five-day process for answering critical business questions through design, prototyping, and testing ideas with customers.
Developed at GV, it’s a “greatest hits” of business strategy, innovation, behavior science, design thinking, and more — packaged into a battle-tested process that any team can use.
The design sprint gives teams a super-power: fast-forwarding into the future to see whether a solution is worth their time and effort, and gathering customer reactions before investing heavy resources in delivering it.
Before the five-day process can begin, we have to prepare for everything ahead of us. As we all know, coming up with innovative solutions requires thoughtful investigation, creativity and making decisions based on desired outcomes and customer reactions.
For this to happen, the team needs to be focused on trying to come up with solutions to the problem ahead of them, so here’s a list of things to consider before beginning the process:
1. Define the problem you are trying to solve
2. Gather a team of 3-5 people to work towards a solution, plus a decider
3. Have a facilitator from the team
4. Timebox everything
5. No distractions (other meetings, etc)
6. Bring sticky notes, paper and markers
7. Have a big whiteboard
8. Tape for putting sketches on the walls
9. Ask the client ahead of time to bring in 5 people from the target group for user interviews.
The goal of the first day is to deeply understand the problem from the point of view of those using the product: to empathise with the end user.
It all starts with the client pitching the product and/or the problem users are facing to the design sprint team. We then try to gather as much information about the challenge as possible:
1. Why is this a problem? Why does this product need to exist?
2. Who is the user and what defines them as an individual?
3. What other solutions are there and what have you tried so far?
4. Ask the client’s domain experts everything they know about this problem.
5. Define a domain vocabulary and explain what every domain specific word means.
By the end of the day we should be able to respond to the following question:
Why should the customer use our product and what do they expect from it?
As you are doing this, take notes, map out a “happy” user journey and focus on one persona. In a nutshell, gather as much information as possible about the problem so you can focus on its critical part. Define for whom you are solving this problem.
Following a day in which we did our best to understand the problem and identify our target persona, we recommend that our clients find at least 5 users who fit the persona we defined the day before. We kindly ask them to screen the people who will be brought in on Friday, making sure they fit that persona by asking questions derived from the user persona.
The following day, the team starts working on solutions to the problem we decided to solve. We do this through lightning demos, in which each team member demos an existing solution to a similar problem for 3 to 5 minutes. Make sure to timebox the demos and to take notes on a whiteboard about every solution presented.
Next, each team member sketches their own solution, making use of the notes and the points the team found interesting, useful and, of course, helpful towards solving the problem.
Each sketch is anonymous, so everyone should try to make theirs as self-explanatory as possible: choose words carefully, use descriptive titles, and don’t worry at all if it’s scrappy or ugly.
By the third day you should have a couple of designs tackling the problem you and your team decided to solve. Sadly, you can’t prototype and test every one of them, so the team should put together a solid solution with the best chances of success.
Each member of the team should start by silently analyzing the solutions, which should be on the wall by now, and taking notes about each one. Afterwards, each member presents their opinions and the team votes for the solution they like most. The decider can weigh in at this point in case of a tie. The winning solution can be improved with features or approaches from the other sketches; for example, one of them might provide a more elegant search experience.
Put it all together in a storyboard to follow the next day.
On the fourth day we create a prototype to get into the hands of users the next day. It sounds intimidating to build this in just one day, but there’s no need to be scared, as we have already made the most important decisions.
Use a tool such as Marvel or InVision to create a realistic-looking prototype to run user tests on. Divide responsibilities between team members to be as effective as possible: who does which screen, who writes the user copy, who is responsible for providing images, and so on. Build just enough quality into the prototype that you can learn from it.
The last step of the design sprint is the user testing of our prototype. Just by looking back at what we achieved in the past days we can be proud of the progress we have made so far and we’ll take it one step further by interviewing five users using our prototype.
The end goal is to learn, so make sure the users feel comfortable. You can set up cameras to observe how they use the prototype and what their reactions are; remind them that some things might not work in the prototype; and ask them to think out loud as they use it.
Ask your users to perform a specific task and see how they manage, ask follow-up questions, take notes as you go, and make sure to start with something simple before moving up to more complicated tasks. Review the learnings with your team, look for positive and negative patterns and see how they match your solution and long-term goal.
Whether we are taking over a legacy system or designing a product from scratch, and whatever the product’s maturity, once we have a clear understanding of what our clients are trying to achieve and we agree on how, we draw up a delivery plan based on our way of working that tries to minimize time to market.
We use the design sprint findings to identify what features we need to prioritize and what can be left for a later release. We organize the features into milestones so that our clients can easily understand the overall evolution of the product. This, of course, has to be approved by the product stakeholders based on the business needs and identified opportunities and doesn't necessarily have to happen during the workshop. We can always start discussing this later, but the sooner we have an agreement the better.
Our way of working is a result of our organisation’s core values: “Striving for excellence” and “Freedom to innovate”.
We love working with agile methodologies such as Scrum or Extreme Programming, which are a great fit for delivering the immediate value of the product we have to build.
When customers approach us we are flexible and can adapt our way of working, following as much as possible the principles we use for internal products:
1. Ideation: Inspired by brainstorming, interviews, surveys;
2. Prioritisation: Making ideas ready and deciding which to test next;
3. Develop & Test: Building and testing ideas with users and stakeholders;
We encourage a user-centred design process to create solutions that users want to engage with. For that we need to deliver fast by:
1. Having the right people on board: we balance the right mix of skills and expertise that defines a truly cross-functional team.
2. Everyone on our team recognizes the value of continuous deployment, so we embrace CI/CD practices and try to invest as much as possible in automation.
3. We aim for a cross-functional organisational structure: we operate in small teams and try to keep everything simple.
Communication and building relationships with our clients are crucial to our common success. Under a continuous deployment approach we agree with our client on the communication channels, tools, environments and processes we will use. We ensure that there is always an appropriate level of communication, and we heavily promote video conferencing and face-to-face conversation whenever possible.
1. Process: Dedicated development team and active involvement of key stakeholders - Product Owner, Project Sponsors, Scrum team.
2. Tools: Application Lifecycle Management, Quality Check, Defect Management, CI/CD
3. Environment: Shared Dev and QA environment, VPN access
4. Communication: Agile Ceremonies, Shared Confluence, Zoom, Video & Chat channels
Within the agile methodology we rely on strict definitions of standards, of ready and of done to front-load quality. This helps us avoid propagating requirement and backlog design defects into the development sprint, propagating in-sprint design defects into production and support, and building up an uncontrolled technical debt deficit.
The sprint team must adhere to design and development standards, meeting standard code and design quality metric targets with the help of quality analysis tools and peer/lead developer reviews.
Our human-centered approach is based on the principles of Design Thinking and is the driving force behind how we deliver products by continuously ideating, building, testing and learning.
We strive to build products that solve real-world problems for real people, and have an aversion to building solutions based on assumptions; instead we design user-centered solutions that users want to engage with.
We believe that the sooner we clear up any assumptions a team makes, the better the chances of building a product that users love. Because of this, we want to talk to the actual users who will be using the products we build, rather than make guesses, before we write a single line of code.
Prototypes help us do that by creating an experience close to the real thing in terms of UI, even though some things might not be wired up. To get a reliable outcome, we must test the prototype with 5-8 people who match the user persona we previously defined and agreed we would be solving this problem for.
Every assumption must be clarified by a task we ask the test user to perform and a question we ask them following these tasks without overwhelming them. We take notes on what the users say, how they react and what they think about the proposed solution in order to build empathy for the people using our products.
We then review these notes with the whole team and check if the assumptions we made have been validated.
1. Is this a problem worth solving?
2. Does the user face this kind of issue?
3. Does our solution lead to the user's expected outcome?
Based on these results we can determine the next steps for our product. Do we need to add new features to the product? Maybe some features we designed are not useful or needed at this point? Should we start a new design sprint to dig deeper into our solution?
We must keep in mind that even if some features, or the whole solution, are invalidated, we should treat them not as failures but as positive learnings.
After we find out, through research, who our users are and what they are trying to accomplish, we create user personas to define them.
Users don’t just land on our website or mobile app out of thin air; they come to our product to achieve a goal. That is why it is crucial to our human-centered approach to design our user flows with the user’s goal in mind. Sitemaps or wireframes, on the other hand, just create the layout and structure of our product without addressing the user’s needs. The latter should be a by-product of our user flows and not the other way around; as every designer should know, form follows function.
There are multiple user flows that we can use to illustrate the steps involved in achieving a goal.
You have probably experienced this as a user dozens of times: you want to order a product, but first you need to add your shipping address and credit card information. This is one of the most common in-app user flows.
As we previously mentioned, users don’t just land on your product out of thin air. It might be that the user saw a banner with a product on a different website, or the user received an SMS message that opens a link on the device browser or even a sign-up confirmation email. Making sure that the user experience is flawless when navigating across applications is key.
These flows are hard to handle, but with care the user can still achieve their goal. There are situations where the user has to input a confirmation code received via SMS, for example; we must make sure the session doesn’t time out while they reach for their phone or, if it does, that they are able to receive another code.
User flows can come in multiple iterations (whiteboard sketches, wireframe flows and even prototypes), and the key is to always keep the user in mind. With little overhead we can incorporate this into our solution design process to address user needs while meeting business objectives.
Even though our development practices revolve around the Scrum methodology, our development teams like to incorporate Extreme Programming engineering principles into our way of delivering products. Scrum and Extreme Programming are two agile processes that are very well aligned and complement each other very well when it comes to delivering high-quality products.
We always use git for versioning our source code when delivering software products and we rely on services such as GitHub, GitLab or BitBucket to host our projects.
We follow the git-flow branching model, in which the central repository holds two main branches: master and develop.
The master branch is always in a release-ready state, while the develop branch reflects the latest delivered development changes that should be incorporated in the next production release.
Beside these two main branches there are three different types of branches that allow us to go from product requirements to production deployment:
1. feature branches
2. release branches
3. hotfix branches
These branches are used, as the name suggests, to add new features to the next release. We branch them from the develop branch and merge them back when done.
Release branches come in handy when preparing releases. Once a release branch is started from the develop branch, it should reflect the desired state of the next release. On this branch we perform small adjustments for the upcoming release, such as version number bumps and minor bug fixes, and we merge it into master and back into develop when complete.
Mistakes can happen, and issues that end up in production need a way of being fixed in a manageable manner. Hotfix branches are used to address these issues; they should always be branched from the master branch, merged back into master once the fix has been verified by a tester, and merged into develop to make sure the issue doesn’t end up in production again.
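The branching model above can be sketched with plain git commands. This is a minimal illustration, not our exact scripts; the repository, branch and tag names are made up for the example.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch -M master

# develop is cut from master at project start
git checkout -q -b develop

# feature branches: from develop, merged back into develop when done
git checkout -q -b feature/search develop
git commit -q --allow-empty -m "add search"
git checkout -q develop
git merge -q --no-ff feature/search -m "merge feature/search"
git branch -q -d feature/search

# release branches: from develop; merged into master (tagged) and back into develop
git checkout -q -b release/1.0 develop
git commit -q --allow-empty -m "bump version to 1.0"
git checkout -q master
git merge -q --no-ff release/1.0 -m "release 1.0"
git tag -a 1.0 -m "release 1.0"
git checkout -q develop
git merge -q --no-ff release/1.0 -m "merge release/1.0 back into develop"
git branch -q -d release/1.0

# hotfix branches: from master; merged into master and develop once verified
git checkout -q -b hotfix/1.0.1 master
git commit -q --allow-empty -m "fix production issue"
git checkout -q master
git merge -q --no-ff hotfix/1.0.1 -m "hotfix 1.0.1"
git checkout -q develop
git merge -q --no-ff hotfix/1.0.1 -m "merge hotfix/1.0.1 back into develop"
git branch -q -d hotfix/1.0.1
```

Note the `--no-ff` flag: it forces a merge commit even when a fast-forward is possible, which keeps the history of each feature, release and hotfix visible as a distinct branch.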
We encourage our people to do pair programming - code that is written on a single machine by two people sitting next to each other and looking at the same IDE, or remotely by setting up a video call and sharing one's screen.
It might look counterproductive to have two developers write only one piece of code but in the long run, applying this technique to critical software logic can be highly beneficial.
Think about it: if it’s a critical piece of logic, don’t you want other people to have a look at it and make sure everyone understands it and agrees with the solution? This usually leads to higher-quality code, which should translate into cost savings thanks to less maintenance and refactoring in the long run.
Obviously, we shouldn’t be doing pair programming all the time, but we can’t ignore that the best collaboration between two team members happens when both look at the same screen and work towards a common goal.
Having a test-driven approach to writing code is highly beneficial because it moves the capturing of coding errors earlier in the development process rather than later and possibly even in production.
Test-driven development helps us avoid these situations, but once a merge request is created and the tests pass, it becomes the team members’ responsibility to tighten this feedback loop further by looking over the new code that was written.
Here are some of our guidelines for reviewing code:
1. The merge request should have a good commit message and have a reference to the user story, task or issue the code was written for.
2. Team members should ask for code review either by chat or by assigning a reviewer to the merge request opened.
3. Obviously, the reviewer should always be a person other than the one opening the merge request.
4. The reviewer should look for possible mistakes, make sure that the coding standards are followed and provide feedback on the style of code and solution provided.
5. Only when the feedback is addressed can the merge request be accepted.
6. When performing the merge, the commits should be squashed, and the local and remote feature or fix branches should be deleted.
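As a sketch of that last step, here is how a reviewed feature branch might be squashed into develop and cleaned up. The branch name, story ID and file are hypothetical; a hosted service like GitLab or GitHub can also do the squash for you when the merge request is accepted.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch -M develop

# A reviewed feature branch with several work-in-progress commits.
git checkout -q -b feature/PRJ-42-search
echo "search v1" > search.txt
git add search.txt
git commit -q -m "WIP search"
echo "search v2" > search.txt
git commit -q -am "address review feedback"

# Squash everything into a single commit on develop that references the
# story, then delete the local branch. The remote branch would be
# removed with: git push origin --delete feature/PRJ-42-search
git checkout -q develop
git merge -q --squash feature/PRJ-42-search
git commit -q -m "PRJ-42: add product search"
git branch -q -D feature/PRJ-42-search
```

Because a squash merge records no merge relationship, the branch must be force-deleted with `-D`; develop ends up with one clean commit that carries the story reference.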
Frequently testing and merging branches allows us to catch mistakes early on. This, of course, cannot be done without a test suite that anyone can run on their own machine and in a shared environment, which allows us to integrate our changes with the code written by other team members.
We don’t want to end up with issues in production or to run into “it runs on my machine” scenarios - that’s why a test suite (unit tests, integration and end-to-end tests) that can be run both on the developer’s machine and on a shared one thanks to Jenkins, Bamboo, Circle CI or Travis allows us to deliver code with more confidence.
This, of course, cannot be done without a thorough test suite, which we achieve using a test-driven development approach. Whenever a test fails, it’s a show stopper for the developer who pushed the offending code, and tools like Slack, which give us real-time notifications when such issues occur, help us tackle them as soon as they happen.
Building a strong test suite that allows us to thoroughly test our source allows us to deliver with confidence and in some situations we can build pipelines that can go from a single commit all the way into production, whether it’s a mobile app or an API. This allows us to truly deliver improvements in a continuous manner.
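As a minimal sketch, the stage ordering of such a pipeline could look like the following shell script, the kind a Jenkins, Bamboo, CircleCI or Travis job might run. The real commands (gradle tasks, npm scripts, a deploy script) are hypothetical placeholders; what matters is that `set -e` stops the pipeline at the first failing stage, so nothing reaches production unless every suite is green.

```shell
#!/bin/sh
set -e

# Run one named pipeline stage; any non-zero exit aborts the pipeline.
run_stage() {
  name=$1
  shift
  echo "stage: $name"
  "$@"
}

run_stage "unit tests"        true    # e.g. ./gradlew test
run_stage "integration tests" true    # e.g. ./gradlew integrationTest
run_stage "end-to-end tests"  true    # e.g. npm run e2e
echo "all stages green - promoting build to production"
# run_stage "deploy" ./deploy.sh production   # hypothetical deploy step
```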
Software testing is the process of evaluating the functionality of a software application to check whether the developed software meets the specified requirements, and of identifying defects so that we can ship a defect-free, quality product.
Testing has always been a vital component in developing high-quality software and as the software development industry has matured it has grown in importance.
To make sure new features and fixes don’t break current work, we use test automation and a quality assurance team that can tackle several projects at a time and quickly respond to the pace of development.
Automated testing saves time, keeping project goals within budget. We use a variety of libraries and tools to write automated tests, including:
1. Selenium WebDriver or Cypress on generic Web Applications.
2. Protractor with Jasmine in AngularJS applications.
3. RestAssured with Serenity reports, JUnit in Java applications.
4. XCTest in iOS applications.
5. Appium in Android applications.
Each team decides what the best testing strategy is for the particular project under development, making sure that we have healthy test coverage throughout the project.
There are multiple reasons to automate beyond simply saying you need automation to be successful with agile methodologies. Manual testing takes too long, and manual processes are error prone. Automated regression tests provide a safety net and give us feedback early and often. Tests can also serve as documentation for stakeholders. Automation can be a good return on investment for any project.
Automated testing does not guarantee that the software meets the highest level of quality that our high-profile clients expect. It’s hard to catch details such as element alignment with automated tests. A human can identify such types of imperfections much easier than a computer. The tester can quickly scan the UI and tell if everything is ok. It would be very hard to specify an entire UI in automated tests and then maintain it for a fast-changing system.
A manual tester didn’t write that code and will most likely try to use it in a different manner than its original developers. Also, a manual tester will try to use it as a regular user, not as the individual who developed it.
We found that a combination of both automated and manual testing is needed to provide the kind of high quality software that our clients demand.
When starting work on a new project, the first steps from a testing point of view are to analyze and evaluate the risks to the business and to understand the client’s needs for testing. In this early phase, we define appropriate test targets, techniques, tasks and schedules based on an awareness of the constraints: framework, technology, process, people, risks, the estimation process, the approach to sizing, prioritization and scoping.
We must identify and use the tools appropriate for the particular situation and goal. We also manage the testing process as part of the development process: we execute tests, file fault reports and direct them to developers and product managers. The next steps are to measure test results and to create test reports based on them. Together with the other team members, the testers are also responsible for reviews and inspections of documents and code.
When going into details for each story, the testers need to estimate the testing tasks on the Sprint Backlog and actively decide on testing tasks for the sprint, contributing to the user stories. The next step is to define and create test cases from the analysis of both functional and non-functional specifications (such as reliability, efficiency, usability, maintainability and portability).
Closely following are the steps where we interpret, execute and document test scripts, materials and regression test packs to test new and amended requirements. If an issue is found, it is logged in an issue tracker.
Each issue gets a priority level from urgent to low, which the development team then resolves based on the time and people available. When a developer fixes an issue, they inform the responsible QA engineers, who verify it. The ticket in the bug tracking system is closed when no issue is detected, and no bug can be marked as fixed until it is verified.
Meanwhile, a tester actively participates in Team Meetings and takes part in the development lifecycle phases, working closely with the Product Manager, Product Owner, the developers and other testers. The main focus of all the team members is to deliver the tasks on time and with the expected quality.
Start testing as early in the development process as possible. Start with reviewing documents and test code whenever it is available.
The first round of testing should happen around the specific changes, on the feature branch. This lets QA test around those changes and check that the requirements are met as specified and that the feature behaves as expected. It also gives the tester an early preview ahead of the second round of testing, which is what actually matters.
The main reason is that we do not want to ‘pollute’ the develop branch before a feature is accepted. In a scrum team, preventing critical bugs in this way enhances our velocity.
A developer creates a feature branch from the develop branch and, after completing the story, requests a code review. Meanwhile, QA deploys the feature branch to the QA environment or a local machine. After testing has passed on the feature branch, the development team merges it into the develop branch. In the last step, QA runs smoke and regression tests on the develop branch.
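That workflow can be sketched as git commands. The branch name, story ID and the deploy/regression scripts mentioned in the comments are illustrative, not real tooling.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch -M develop

# 1. Developer branches from develop and completes the story.
git checkout -q -b feature/PRJ-7-signup develop
echo "signup form" > signup.txt
git add signup.txt
git commit -q -m "PRJ-7: add signup form"
# ...developer opens a merge request for code review...

# 2. QA deploys this branch to the QA environment (or a local machine)
#    and tests around the specific changes, e.g.:
#    ./deploy.sh qa feature/PRJ-7-signup      (hypothetical script)

# 3. Once feature-branch testing passes, the team merges into develop.
git checkout -q develop
git merge -q --no-ff feature/PRJ-7-signup -m "merge PRJ-7 after QA sign-off"

# 4. QA runs smoke and regression tests on develop, e.g.:
#    ./run-regression.sh develop              (hypothetical script)
```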
The team has years of experience designing and architecting applications across various business domains. From these experiences we have identified best practices and proven strategies for delivering a quality product.
Based on project needs and team brainstorming sessions we end up with a recommendation on architecture and technology stack.
The best practices and design principles outlined in this document allow us to deliver a system that is secure, reliable and efficient. We continually analyze and improve our architecture and integration services.
Based on project requirements (functional/non-functional) we will need to choose the most suitable architecture in order to deliver the product at a high quality level.
Every architectural decision should be kept under ADR (architectural decision record) for future reference.
According to Wikipedia:
1. An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are similar to software design patterns but have a broader scope.
We definitely recommend layering the code into presentation, service, business and data layers when we take on a project where codebase quality and maintainability are important.
When we need to decouple distributed components, we can consider patterns like Broker, Event-Bus, Blackboard and so on.
The above are just brief mentions; the process of choosing or adapting a pattern is tightly coupled to the desired functionality.
We take advantage of proven tools in order to recover from infrastructure and service disruptions. We monitor the system for key performance indicators and respond automatically with repairs when thresholds are breached. We usually recommend scaling horizontally to increase availability and reduce the impact of a single node failure.
The system data should have periodic backups so that, when a service disruption occurs, the application’s data can be restored from these sources. We recommend a fully managed relational or NoSQL database with very low downtime. The availability and design of the database allow the delivery of an efficient service.
To prevent security threats, our team applies the latest security standards while delivering business value.
We make use of traceability: we monitor, alert and audit so that we can take action when needed.
The technical aspect must ensure that data does not fall into the wrong hands or get lost due to carelessness or technical defects. Among other things, we recommend to our clients the following technical measures to protect against unauthorized access:
1. Secure Infrastructure - encryption, secure software, security updates
2. Reliable technologies - certified data center, daily backups, proven software
3. Data avoidance - Comprehensive data control, anonymisation, no cookies, no IPs in log files, etc
4. Clear organisational structures - Differentiated access rights
Technical defects can never be completely excluded. But the risks and possible consequences can be greatly limited by a number of measures:
1. developers are held responsible for security decisions, including key layout and encryption granularity
2. access control with strong compartmentation: authentication, granular CRUD authorization per user/table
3. leakage prevention at rest / in use / in motion
4. authenticity and integrity of all data
5. automated tests
We usually recommend an architecture that scales horizontally. This provides the elasticity to add and remove resources automatically based on the workload at any moment in time. The optimal instance size can be selected based on the application design, usage patterns and configuration settings. Logs and metrics provide useful insights into how the system performs. When performance drops or the system fails, an automatic self-healing process kicks in. This is delivered through the underlying platform together with the team’s skills and resources.
The operational quality best practices concern the ability to run and monitor the system so that we know we are delivering business value, and to improve our supporting processes.
We work on anticipating failures and understanding their potential impact on our system. We drive our improvements through lessons learned and overall experience.
We implement processes that allow us to be notified in time (fast feedback) and implement a rapid recovery procedure. We measure our operational quality by the achievement of business and customer outcomes. Our tools allow the prioritization of responses to events based on their customer and business impact. The system should be periodically analyzed for the health of the workload so that we can take the appropriate action.
Using Swift and Xcode is the most popular and versatile way of developing applications in the Apple ecosystem. This way we can deliver performant and familiar-looking experiences to end users while always being able to take full advantage of the hardware and software capabilities of the device. There is also a thriving iOS community, which means libraries, documentation and know-how are always easy to find.
For our continuous integration pipeline we use the industry standard Fastlane tool, which allows us to easily automate the process of distributing apps both for local testers and for releasing in the AppStore.
CocoaPods is the dependency management system of our choice. We use a combination of well-known, time-tested libraries as well as in-house ones. Our team does not shy away from opening new pull requests upstream, while using forked and customized versions in the meantime when necessary.
Reusing code across projects is a big productivity boost. UI components and other libraries not unique to any particular project are bundled into their own repositories and integrated with CocoaPods or Carthage so they can be shared across the products we work on whenever possible.
We are well versed in using tools and scripts to increase our productivity (Charles, Node.js and Ruby, to name a few), as well as Xcode’s tools for performance testing and profiling. Any useful scripts are shared across the team.
As far as style guides go (and this can be a contentious topic), we have chosen to use the default style guide provided by the XCFormat extension in combination with SwiftLint.
Any new developer joining our team or coming from another project will feel right at home in our codebase, as we use well-known patterns like MVC and MVVM.
We offer enough flexibility to use the patterns that fit the job while keeping the codebase a coherent whole. We always strive to keep our code clean and decoupled. Unit testing and code review are staples of our development process, and documenting the code is always required. Finally, we prefer using Storyboards over writing UI in code.
Security and user privacy are of utmost importance, and we make sure to always follow best practices. Any sensitive user information is kept in the Keychain, and we use biometric authentication whenever necessary. All API requests make use of App Transport Security, and when additional protection is needed, we use certificate pinning to prevent man-in-the-middle attacks. We also make sure to redact any user-identifiable information from logs to help protect user privacy.
There are many things to consider when building an Android mobile app: good user experience, high performance, security, device fragmentation and much more. To successfully fulfill all these requirements, our first choice is the native development approach.
With Android Studio, the Android SDK and the Android Developer Tools we have a comprehensive set of tools for developing, testing, optimizing and running our applications in even the most complex scenarios.
We use Java, a mature programming language with a vast number of open-source tools, libraries and documentation, as well as Kotlin, which is more lightweight, expressive and less verbose.
We use Jenkins and a wide range of other tools for our continuous integration process, which allows us to distribute builds to the testing team, run daily automated test suites and code quality checks, and upload APKs to the Google Play Store.
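As a sketch, a declarative Jenkinsfile for such a pipeline could look like this; the stage contents and Gradle task names are placeholders:

```groovy
// Jenkinsfile (sketch) -- stages and Gradle tasks are illustrative.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh './gradlew assembleDebug' }
    }
    stage('Test') {
      // unit tests and static code quality checks on every build
      steps { sh './gradlew testDebugUnitTest lintDebug' }
    }
    stage('Distribute') {
      when { branch 'main' }
      steps { sh './gradlew assembleRelease' } // upload step omitted
    }
  }
}
```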
An important goal for us is to have a consistent code base that developers can easily understand. This makes it easy to start new projects, scale the development teams and have a predictable pace of development.
We make use of design patterns like MVVM and dependency injection frameworks, together with data binding, reactive programming and LiveData, to create clean, reusable and decoupled code.
Unit testing, UI testing and code reviews are a must in all our projects and are integral to our goal of ensuring code quality. Writing clean code is a necessary mindset, and we do our best to make sure that every developer on our team understands and applies this principle.
Ensuring security and user privacy is a must, and we always make sure we follow best practices. We make use of the Android Keystore system and biometric authentication to encrypt the most sensitive user information. We use TLS (HTTPS) instead of plain HTTP connections and properly validate the server certificate. Making use of the app permissions model allows us to request access only to required resources and to protect app components.
Before every release we go through a comprehensive checklist to make sure no security vulnerabilities slip into production: never saving private or sensitive user data on the SD card, restricting WebViews from accessing local data, making sure no sensitive data is sent via Broadcasts, Intents or other IPC mechanisms, stripping logs from production builds, validating user input, and much more.
Because protecting the source code is also very important, we always use ProGuard to optimize and obfuscate our code, make sure no confidential information can be extracted from the compiled app, and verify that the libraries we use have no known security exploits.
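A few representative ProGuard rules are sketched below; the package name is hypothetical:

```
# proguard-rules.pro (sketch) -- package name is illustrative.
-keepattributes Signature, *Annotation*
# Keep model classes that are (de)serialized via reflection
-keep class com.example.app.model.** { *; }
# Strip debug/verbose Log calls from release builds
-assumenosideeffects class android.util.Log {
    public static int d(...);
    public static int v(...);
}
```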
Engineering is about making a product better, learning from past mistakes and creating a process which is easy for all to understand.
The web is ubiquitous. Web apps continue to replace old-school desktop applications. The benefits are obvious and come from the very nature of the Internet. One can tap into the functionality built into our apps regardless of the device used. If you have a connection, you're ready to go: nothing to install, any OS supported.
We deliver across web, mobile and tablet landscapes with a single codebase. Responsive, mobile-optimized web apps are the `de facto` alternative to native mobile apps. They may not be as fast or as powerful, but they are readily available alongside the 'main' web version when it is built the right way.
Depending on the nature of the project we use either Angular or React. Angular is highly modular and easy to build, test and maintain. React, on the other hand, has a gentler learning curve and may be more suitable for projects that need a higher degree of flexibility with regard to the tech stack. We analyze the nature of the project and decide which of the two fits our needs.
We develop following a mobile-first approach, which ensures the app first works and looks good on the least capable devices.
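As an illustration of mobile-first CSS, base styles target the smallest screens and media queries progressively enhance the layout for larger viewports (the `.card-list` class and the breakpoints are invented for the example):

```css
/* Mobile-first (sketch): base styles are for phones,
   media queries add enhancements for larger screens. */
.card-list {
  display: grid;
  grid-template-columns: 1fr;   /* single column on phones */
  gap: 1rem;
}

@media (min-width: 48em) {      /* tablets and up */
  .card-list { grid-template-columns: repeat(2, 1fr); }
}

@media (min-width: 64em) {      /* desktops */
  .card-list { grid-template-columns: repeat(3, 1fr); }
}
```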
When developing products, we adhere to the best practices out there. Git is always our version control solution of choice because we love its decentralized approach to managing source code.
By leveraging extreme programming practices such as code reviews and pair programming, we make sure the code we deliver is of the highest possible quality. Mistakes can still happen, though, so we use continuous integration tools such as Travis, Jenkins or CircleCI to make sure we don't introduce regressions and that our code doesn't work only in our own environment.
This cannot be done without a healthy suite of unit tests and acceptance tests that can be run both on the developer's machine as well as on a shared environment provided by our chosen CI solution.
The tools we have today empower developers to build projects much faster, to create better web applications, and the choices depend again on what kind of project you have.
To build a solid foundation we make use of:
1. JavaScript Tools:
a) Transpiling / Type Checking: Babel / TypeScript / Flow
b) Linting / Hinting & Style Linter: ESLint
c) Unit Testing: Jest/Jasmine
d) Code Formatter / Beautifier: prettier / js-beautify
e) Coverage Tools: Istanbul
2. CSS Tools:
a) CSS Frameworks: Material Design / Bootstrap / Foundation / Semantic UI
b) Transpiling: Sass/SCSS
c) Linting/Hinting: CSS Lint
d) Architecting CSS: BEM / Atomic Design
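To illustrate the BEM convention from the list above, a small SCSS sketch (the `menu` block is invented for the example):

```scss
// BEM in SCSS (sketch): block__element--modifier; `menu` is illustrative.
.menu {                 // block: a standalone component
  &__item {             // element: compiles to .menu__item
    color: #333;
    &--active {         // modifier: compiles to .menu__item--active
      font-weight: bold;
    }
  }
}
```

The flat selectors BEM produces keep specificity low and make each class name self-describing.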
Depending on what we need, we choose the best tools to start a project and speed up the process from end to end.