
The Importance of Integration Testing Part 1 - HTTP Endpoints

Bert is a new joiner within the team. As his first task he's been assigned to create a simple endpoint to expose some existing data onto one of the front-end screens. His manager says that this should give him a gentle introduction to the area whilst also giving him the opportunity to gain familiarity with the codebase and SDLC processes. She mentions that he should keep in regular contact with Brenda, who will be handling the UI changes. Bert agrees - he's already quite familiar with Spring Boot from his previous position, so creating a simple new endpoint in an existing service should be straightforward, he thinks to himself. It should look good if he can get this finished and signed off within a couple of days.

Bert clones the repo and notices that the service doesn’t yet expose any API endpoints (Spring @RestController) so he quickly creates a new one, using the existing service pointed out to him by his teammate Ben (simplified below).

@RestController 
@RequiredArgsConstructor 
public class SomeResultEndpoint { 

  private final SomeResultService someResultService; 

  @GetMapping(value = "/someResult") 
  public SomeResult getSomeResult() { 
    return someResultService.getSomeResult(); 
  } 
}

During a quick catch-up, Ben informs Bert about the team's core shared library, which includes various components that should automatically handle all the other key requirements for him. This includes things like authentication, object mapping and error handling. Bert starts up the service locally, pointing to the dev database, hits the endpoint and can see data returned successfully. Content that everything looks to be working OK, Bert moves on to writing tests for the change. He knows from reading the team's SDLC documentation that the pull request build will fail if it sees any drop in code coverage.

The Bad

Generally the first thing reached for in these situations is the trusty unit test - and that's exactly what Bert does initially. In Java, tools like JUnit and Mockito (amongst many others) make this kind of change straightforward to unit test: just mock out the service dependency and ensure the controller behaves as expected. Bert comes up with the following simple test case:

class SomeResultEndpointTest { 

  private SomeResultEndpoint someResultEndpoint; 

  private SomeResultService someResultService; 

  @BeforeEach 
  void setUp() { 
    someResultService = mock(SomeResultService.class); 
    someResultEndpoint = new SomeResultEndpoint(someResultService); 
  } 

  @Test 
  void getSomeResult() { 
    SomeResult expected = TestUtils.getSomeResultData(); 
    when(someResultService.getSomeResult()).thenReturn(expected); 
    SomeResult actual = someResultEndpoint.getSomeResult(); 
    assertThat(actual).isEqualTo(expected); 
    verify(someResultService, times(1)).getSomeResult(); 
  } 
}

Initially, Bert tried to construct some test SomeResult instances himself, but quickly realised that the data structure was complex and he was unfamiliar with what a real-world scenario would look like. Thankfully, Ben pointed him towards some existing helper methods, created alongside the original service, that looked to create some realistic data and populate most of the fields.

Bert ran the test suite and, as expected, everything passed successfully. But Bert had some nagging doubts - what about the negative scenarios? What about the HTTP status codes? All of this was being handled by either Spring or the core shared library components. Bert created a simple negative test case, but then began to realise that there wasn't really much more he could add here:

@Test 
void getSomeResult_propagates_exception() { 
  when(someResultService.getSomeResult()).thenThrow(new RuntimeException("error")); 
  assertThrows(RuntimeException.class, () -> someResultEndpoint.getSomeResult()); 
}

Bert commits his change, pushes it and creates a pull request for the team to review. The build passes successfully and code coverage is 100% on Bert’s changes - great! The pull request gets approved, merged and deployed into the dev/qa environments. Bert pings Brenda (who is doing the UI changes) that the change is ready to begin integrating with.

The Ugly

The next day Bert gets a message from Brenda - “the service isn’t working properly”, she explains. “Every time I call it in QA I get a 500 error returned”. Bert quickly pulls up the logs and notices many exceptions being thrown from the endpoint - all of which seem to be related to Jackson when the response SomeResult is converted to JSON.

com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError) (through reference chain: java.util.concurrent.ConcurrentHashMap[" "]->com.java.sample.OraganisationStructures$$EnhancerBySpringCGLIB$$99c7d84b["$$beanFactory"]->org.springframework.beans.factory.support.DefaultListableBeanFactory["forwardRef"])
    at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:706) 
    at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155) 
    at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:704) 
    at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:690) 
    at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155) 

A quick search of the error indicated a circular reference issue when trying to serialize the SomeResult instance. Sure enough, the data structure contained self-references. Ben explained that it was actually a linked-list type structure, useful for the existing processing but perhaps not ideal for an endpoint representation. "Why didn't this issue come up when you were testing?", asked Ben. Some investigation later, Bert found that the dev database contained only old data, which prevented such instances from ever being created when he ran the service locally. The test utility methods did create this scenario, but the tests themselves had failed to pick up the issue. "I guess we have never tried converting that model to JSON before", Ben admitted. Bert quickly resolved the issue, pushed his changes and informed Brenda about the fix - noting that he didn't have to change any of the tests as part of his change.
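
The post doesn't show Bert's exact change, but a common way to break this kind of Jackson cycle is to stop the serializer from following the self-reference, e.g. with Jackson's @JsonIgnore (or to map the result onto a dedicated response DTO instead). A minimal sketch, with purely illustrative field names:

public class SomeResult { 

  private String name; 

  // The linked-list style self-reference that triggered the infinite recursion - 
  // excluding it from serialization means Jackson no longer follows the cycle 
  @JsonIgnore 
  private SomeResult next; 

  // getters/setters omitted 
}

Annotations like @JsonManagedReference/@JsonBackReference (or a separate view model) are alternatives if the reference still needs to appear in some form.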

The next day Bert gets another message from Brenda - "I'm now seeing some strange behaviour when unauthorized users call the endpoint. I expect the response to conform to our standard error model, but I just get a wall of text and a 500 error instead of the 401 status code I expect". Again Bert checks the logs and sees new AuthorizationException stack traces coming from the shared library component which performs the authorization checks. This looks like expected behaviour, Bert ponders, but why doesn't the response get mapped correctly? Ben points him towards the AuthExceptionMapper class in the shared library, which converts the exception to the common error model. After some debugging, Bert found that the mapper had not been correctly configured in the service. Ben explained, "that mapper is still quite new, I guess it never got added since that service hasn't exposed any endpoints before". Again, Bert quickly fixes the issue, pushes his changes and informs Brenda - again noting that he did not have to change any of his test cases as part of the fix.
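
As before, the post doesn't include the actual fix - the real AuthExceptionMapper lives in the team's shared library - but the plain-Spring equivalent of what was missing is an exception handler registered with the container that converts the exception into the common error model and a 401. A rough sketch, with AuthorizationException and ErrorModel standing in for the shared library types:

@ControllerAdvice 
public class AuthExceptionHandler { 

  // Convert the shared library's AuthorizationException into the team's 
  // standard error model with the correct HTTP status code 
  @ExceptionHandler(AuthorizationException.class) 
  public ResponseEntity<ErrorModel> handleUnauthorized(AuthorizationException ex) { 
    return ResponseEntity.status(HttpStatus.UNAUTHORIZED) 
        .body(new ErrorModel("UNAUTHORIZED", ex.getMessage())); 
  } 
}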

The Good

After these fixes, the new endpoint works as expected and Brenda is able to integrate successfully, but Bert is quite rightly less than satisfied. Not only did an otherwise straightforward change take much longer than it should have, but even now he cannot be confident that it works as expected in all scenarios, let alone six months down the line. Bert brings up the issue in the sprint retrospective, highlighting a number of areas that are not covered by the current test suites - even though the code coverage metrics might suggest they are:

  • JSON object marshalling - both in request and response bodies
  • URL formatting and HTTP request methods
  • usage of query and/or path parameters
  • any exception scenario requiring use of separate exception mappers or handlers (response format and HTTP status codes)
  • any additional functionality provided in controller advice, filters, converters etc.

The team agrees: clearly there is a need for automated integration tests for these endpoints in addition to the conventional unit tests. Thankfully, Spring comes with a number of built-in solutions - one being MockMvc, which the team ends up using. A typical problem for integration tests like this is the need to spin up a full Spring container - for most apps that might mean queue listeners being created, other services being called during startup etc. - not something that is ideal for a short-lived test suite. Bert suggests configuring the app in such a way that a "test" instance can be started (without all the external dependencies etc.) to make creation of a proper integration test suite much easier. But in the meantime, MockMvc combined with @WebMvcTest has a nice way around it:

Using this annotation will disable full auto-configuration and instead apply only configuration relevant to MVC tests (i.e. @Controller, @ControllerAdvice, @JsonComponent, Converter/GenericConverter, Filter, WebMvcConfigurer and HandlerMethodArgumentResolver beans but not @Component, @Service or @Repository beans).

Bert explains that this in effect gives you a cut-down Spring container which just creates the components required to support MVC/REST functionality. No need for the rest of your beans to be created - those can be mocked out as usual. Bert comes up with the following additional test cases for the original change:

@WebMvcTest(SomeResultEndpoint.class) 
class SomeResultEndpointIntTest { 

  @Autowired 
  private MockMvc mockMvc; 

  @MockBean 
  private SomeResultService someResultService; 

  @MockBean 
  private AuthorizationService authorizationService; 

  @Test 
  void getSomeResult_succeeds() throws Exception { 
    when(authorizationService.isAuthorized(anyString())).thenReturn(true); 
    SomeResult expected = TestUtils.getSomeResultData();  
    when(someResultService.getSomeResult()).thenReturn(expected); 

    this.mockMvc.perform(get("/someResult")) 
      .andExpect(status().isOk()) 
      .andExpect(content().string(equalTo(marshalObject(expected)))); 
  } 

  @Test 
  void getSomeResult_notfound() throws Exception { 
    when(authorizationService.isAuthorized(anyString())).thenReturn(true); 
    when(someResultService.getSomeResult()).thenReturn(null); 

    mockMvc.perform(get("/someResult")) 
      .andExpect(status().isNotFound()); 
  } 

  @Test 
  void getSomeResult_unauthorized() throws Exception { 
    when(authorizationService.isAuthorized(anyString())).thenReturn(false); 
    SomeResult expected = TestUtils.getSomeResultData();  
    when(someResultService.getSomeResult()).thenReturn(expected); 

    mockMvc.perform(get("/someResult")) 
      .andExpect(status().isUnauthorized()); 
  } 
}

The above are three very simple test cases, but crucially provide coverage in a number of key areas:

  • we have the correct URL and are able to respond to GET requests over HTTP
  • the response can be successfully serialized to the required response format and matches the expected output (JSON in this case, but it could be anything)
  • exceptions are handled as expected, the HTTP response status codes are correct
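
One small assumption in the success case: the marshalObject helper isn't defined in the post, but presumably it just serializes the expected object with Jackson's ObjectMapper so it can be compared against the raw response body - something along these lines:

private static String marshalObject(Object obj) throws JsonProcessingException { 
  // Serialize the expected object to a JSON string for comparison with the response body 
  return new ObjectMapper().writeValueAsString(obj); 
}

In a @WebMvcTest the container's own ObjectMapper can be injected instead, so the comparison uses the same serialization settings as the running application.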

Bert highlights that Spring has a number of other ways of handling the above - @SpringBootTest if you want to really start up the full application (combined with @ActiveProfiles), alongside utilities like TestRestTemplate - but the team agrees that even just the above is a vast improvement.
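
For completeness, a sketch of that heavier-weight option - booting the real application on a random port and calling it over actual HTTP (the "test" profile and assertions shown are just placeholders):

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) 
@ActiveProfiles("test") 
class SomeResultEndpointFullStackTest { 

  @Autowired 
  private TestRestTemplate restTemplate; 

  @Test 
  void getSomeResult_overRealHttp() { 
    // A real HTTP round-trip through the full application context 
    ResponseEntity<SomeResult> response = restTemplate.getForEntity("/someResult", SomeResult.class); 

    assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK); 
  } 
}

This only makes sense once the app can start without its external dependencies - exactly the kind of "test" instance Bert suggested earlier.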

Takeaways (TL;DR)

The example above is somewhat contrived, but really these scenarios are not unrealistic at all. There can easily be large areas of your application that your tests don’t actually cover - likely parts that are deeply reliant on ‘magic’ from your framework of choice and/or rely on (at least part of) your application to be running in order to test. How does your code integrate with the framework or other libraries? Is your configuration correct? You need something more than unit tests for this kind of thing and you want to know about such issues as early as possible.

  • Unit tests are great, but not enough on their own
  • Code coverage metrics will lie to you
  • Making any change/fix that doesn’t require you to also modify a test is a red flag
  • We rely more and more on the ‘magic from the framework’, but how do we test it? How do we know we’ve configured it correctly?
  • Above the core business logic (unit cases), every HTTP endpoint should have automated tests covering URL descriptors, methods, object marshalling, exception handling etc.
  • Spring MockMvc is a neat way of producing integration tests for your endpoints, but it is not the only solution

Automatically Update Dependencies with GitHub Dependabot

With the introduction of Actions, GitHub is quickly becoming the one-stop shop for all things CI. But one perhaps less well-known feature is Dependabot, which allows you to automatically keep all your dependencies up to date. Depending on which language/framework you are using, making sure all your libraries are on the latest versions can be tricky. Maven/Gradle have plugins which will notify you of new versions, but this is a decidedly manual process. Plus, if you are unlucky enough to develop in JS land, then good luck attempting to keep your 400 npm dependencies updated at any reasonable cadence.

Instead, GitHub Dependabot will automatically read your project build files (build.gradle, package.json, requirements.txt etc.) and create new pull requests to update libraries to newer versions. If you have a GitHub Action configured to run your build on all PRs, then you can also gain some reasonable level of confidence that such newer versions won't break your project (depending, of course, on the quality of your tests).

Updating Java Dependencies

Configuring Dependabot is as simple as adding a dependabot.yml file in the .github directory at the root level of your project (the same place any Action workflow config files are also placed).

version: 2
updates:
    # Enable version updates for Gradle
    - package-ecosystem: "gradle"
      # Look for `build.gradle` in the `root` directory
      directory: "/"
      # Check for updates once daily
      schedule:
          interval: "daily"

The above example sets up the simplest of use cases, which will use the gradle package ecosystem to search for a build.gradle file in the / directory of your project and attempt to update any libraries on a daily basis.

When a new library version is released, a new Pull Request will be opened on your project - in the below example for Kotlin Ktor:

Dependabot Pull Request

The great thing about Dependabot though is that these pull requests aren't just notifications - it's clever enough to actually modify the build files (build.gradle in this case), bumping the dependency to the newer version:

Dependabot Changes

If all looks good (and hopefully your build still passes), all you have to do is merge the PR and you are good to go.

Updating Python Dependencies

The config for Python (and for a variety of other languages) is very much the same. In this case, the schedule interval is set to weekly instead of daily to avoid too much PR noise from fast-moving dependencies. It is also set to ignore all updates to flask libraries - useful if you are required to stay pinned at an older version for some reason:

- package-ecosystem: "pip"
  # Look for `requirements.txt` in the `root` directory
  directory: "/"
  # Check for updates once weekly
  schedule:
      interval: "weekly"
  ignore:
      # Ignore updates to packages that start 'flask'
      # Wildcards match zero or more arbitrary characters
      - dependency-name: "flask*"

Make sure that you have your build workflow configured to run on all pull requests to the master branch in order to run your build automatically. See this previous post on how to set up build actions.

name: Build

on:
    push:
        branches: [master]
    pull_request:
        branches: [master]

You will then see the standard “All checks have passed” message on the PR if your build still passes on the newer dependency version. Make sure that you have adequate tests before you hit merge without trying it yourself though - ideally decent integration tests which actually start up your application. Unit tests are generally not enough to verify this kind of thing.

Dependabot Build

Additional Commands

You can interact with Dependabot by leaving comments on the pull requests that it opens - if for example you want to rebase against newer changes you’ve committed since it was run, or if you want to ignore a certain major/minor release version:

  • @dependabot rebase will rebase this PR
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

More Info

GitHub Dependabot Docs

Above Examples

dependabot.yml


Kotlin & Java CI with Github Actions

If you have a Kotlin/Java project of any reasonable size, you probably want some kind of CI (Continuous Integration) process running alongside the main development workflow. Commonly this takes the form of something like:

  • running a small build on every pull request before merging
  • running a complete build on every change pushed to master (or any other branch) - including integration tests etc
  • automatically run deployment steps e.g to Heroku, AWS or Github Pages
  • ensure that your project builds and runs on a wide variety of devices e.g. different JDK versions/OSes - or really that it can build on a machine that isn't your local box
  • in general your main branch contains a fully working version of your project
  • run static code analysis tools or linters
  • anything else that can be automated..

Previously, the most widespread tool for this was probably TravisCI (which is free for open source usage). Now, however, there is an alternative built into GitHub itself - GitHub Actions. You can think of it as pretty much the same as other CI tools out there, but you get the added benefit of full integration with GitHub, so now everything can be in the same place!

Creating a Gradle Build Action

Your repository should have a new tab called Actions which is your new portal for anything CI related. Once you click on the tab you will be able to create your first Action. By default, Github will suggest some common workflows relevant to your project (e.g if it’s a Node project run npm run build and npm test). These take the form of open source packages hosted within other repositories, but you can of course create your own custom actions taking the best bits from each.

Github Actions tab

Actions take the form of simple .yml files which describe the workflow and steps to execute. In our case, we want to build and test our Kotlin or Java project. This example will use Gradle, but Maven will work just as well. The below configuration is all we need to build our repo:

name: Build

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11
      uses: actions/setup-java@v1
      with:
        java-version: 11
    - name: Grant execute permission for gradlew
      run: chmod +x gradlew
    - name: Build with Gradle
      run: ./gradlew build

Thankfully the YAML markup is pretty readable. In the above action we perform the following steps:

  • Instruct Github to execute this Action on any push to the master branch, or pull requests targeting master
  • Create a single job called build (you can have as many as you want within a single Action) which runs on an Ubuntu container. There are plenty of other options for which OS image you want to target (runs-on: windows-latest or runs-on: macos-latest). This is great to make sure your project will build and run on a range of different machines.
  • Perform a Git checkout of your repo in the new virtual environment. This step makes use of the uses statement which allows you to reference other packaged actions - in this case actions/checkout. This is where things start to get a lot more powerful as you can begin to publish and reuse workflows from the community
  • Set up a JDK using another action provided by Github. In this case we just use JDK 11, but you could run these steps with a range e.g. 8 to 14 to ensure compatibility (see the matrix sketch after this list)
  • Run a simple shell script to give permissions on the Gradle wrapper. Similarly you could run pretty much any shell scripts you need
  • Execute the Gradle wrapper script to perform a complete build and test of our project. Note that this is exactly what we would do if we were to do the same locally - nothing needs to change just because we need to run this in a CI environment.
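
For reference, a build matrix is the usual way to run the same job across several JDK versions - a minimal sketch (the version numbers are just examples):

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Run the full job once per listed JDK version
        java: [8, 11, 14]

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK ${{ matrix.java }}
      uses: actions/setup-java@v1
      with:
        java-version: ${{ matrix.java }}
    - name: Build with Gradle
      run: ./gradlew build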

And that's all it takes to run a simple Gradle build for our Kotlin or Java project. GitHub will instruct you to commit the .yml file into the .github/workflows directory in the root of your repo so that it can be picked up properly.

Github Actions sample file

Running the CI Workflow

Because we just set up our Action to run automatically on any PR or push to master, there is nothing else we need to do to start utilising our new continuous integration process. In the Actions tab you will see all builds of your project alongside all log output. You will also be notified by email in the event that your build process fails.

Github Actions output

Caching Build Dependencies

If you run the above Action you will probably notice that it takes some time to execute. This is because it has to go out and download all of your JAR dependencies every time it runs. To speed this up, you can use a caching mechanism. After your workflow is executed successfully, the local Gradle package cache will be stored in Github to allow it to be restored on other subsequent runs.

steps:
  - uses: actions/checkout@v2
  - name: Set up JDK 1.8
    uses: actions/setup-java@v1
    with:
      java-version: 1.8
  - name: Cache Gradle packages
    uses: actions/cache@v1
    with:
      path: ~/.gradle/caches
      key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle') }}
      restore-keys: ${{ runner.os }}-gradle
  - name: Build with Gradle
    run: ./gradlew build

More information

This just scratches the surface of what you can do with GitHub Actions (it is a CI solution after all), focusing specifically on Kotlin or Java projects using Gradle. There is of course an ever-increasing number of other supported languages/tools (Node, Python, Go, .NET, Ruby), alongside a number of other nice use cases integrating with other aspects of GitHub:

  • Create Github releases automatically after successful builds
  • Mark issues and pull requests as stale if not updated recently (see the sketch after this list)
  • Automatically label new pull requests based upon predefined criteria
  • Run within Docker containers, Kubernetes and AWS uploads
  • Static analysis and linting
  • Automatically publish build artifacts to Github Pages
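
As an example of how little it takes to reuse a community action, the stale workflow mentioned above only needs a few lines - a sketch, with the message and thresholds picked arbitrarily:

name: Mark stale issues

on:
  schedule:
    # Run once a day at midnight
    - cron: "0 0 * * *"

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/stale@v3
      with:
        stale-issue-message: "This issue has had no recent activity and may be closed soon."
        days-before-stale: 60
        days-before-close: 7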

See the GitHub Actions documentation and the Marketplace for more info and to find some of the more popular packages created by the community. There is probably already something covering your use case.


Angular - Proxy API Requests

If you are developing with Angular locally, then chances are you also have some kind of API server running on the same machine that you need to make requests to. The problem is, your local environment setup may not reflect that of a real-world deployment - where you might use something like Nginx as a reverse proxy. CORS (Cross-Origin Resource Sharing) policy starts to become a problem when you have something like:

  • Angular dev server on localhost:4200
  • Some kind of HTTP API listening on localhost:8080

If you try to make a request from your Angular app to localhost:8080, your browser will block you as it’s effectively trying to access a separate host. You could enable CORS on your server to explicitly enable access from different origins - but this is not something you want to turn on just to get a working dev environment.

A much better option is to use the built-in proxying support of the Angular dev server (webpack) to proxy certain URL patterns to your backend server - essentially making your browser think that they are being served from the same origin.

Create a Proxy config file

To get this set up, simply create a config file called proxy.conf.json in the root of your Angular project (the name doesn't matter, but this is just the convention). The most basic example is:

{
  "/api": {
    "target": "http://localhost:8080",
    "secure": false
  }
}

In this case, all requests to http://localhost:4200/api will be forwarded to http://localhost:8080/api, where your API is able to handle them and pass back the responses.

More options are available in this config file - see here for the docs from webpack.

Point Angular to the proxy config

Next we need to point Angular to the newly created proxy config file to make sure webpack picks it up when the dev server is started (via ng serve).

In the main angular.json file, add the proxyConfig option to the serve target, pointing to your config file:

"architect": {
  "serve": {
    "builder": "@angular-devkit/build-angular:dev-server",
    "options": {
      "browserTarget": "your-application-name:build",
      "proxyConfig": "proxy.conf.json"
    },

When you restart the dev server, you should see the proxy take effect and requests being passed through to your API server accordingly.
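
On the application side, nothing about the proxy needs to be referenced directly - components and services just call the relative /api path and the dev server forwards the request behind the scenes. A hypothetical Angular service call (the URL and names are purely illustrative):

import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ResultService {
  constructor(private http: HttpClient) {}

  getResults(): Observable<any> {
    // Relative URL only - no host or port, the proxy handles the rest
    return this.http.get('/api/results');
  }
}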

Rewriting the URL paths

A very common use case when running proxies is to rewrite the URL paths - the pathRewrite option can be used in this scenario. For example, in the below config all requests to http://localhost:4200/api will be proxied straight to http://localhost:8080 (note the absence of the /api path).

{
  "/api": {
    "target": "http://localhost:8080",
    "secure": false,
    "pathRewrite": {
      "^/api": ""
    }
  }
}

More complex configuration

More complicated configuration use cases can be achieved by creating a proxy JS config file proxy.conf.js instead of JSON (make sure to update the proxyConfig path if you do). The below example shows how to proxy multiple entries to the same target path:

const PROXY_CONFIG = [
    {
        context: [
            "/all",
            "/these",
            "/endpoints",
            "/go",
            "/to",
            "/proxy"
        ],
        target: "http://localhost:8080",
        secure: false
    }
]

module.exports = PROXY_CONFIG;

Because this config file is now a standard JS file, if you need to bypass the proxy, or dynamically change the request before it’s sent, you can perform whatever processing you need in the JS config blocks:

const PROXY_CONFIG = {
    "/api/proxy": {
        "target": "http://localhost:8080",
        "secure": false,
        "bypass": function (req, res, proxyOptions) {
            if (req.headers.accept.indexOf("html") !== -1) {
                console.log("Skipping proxy for browser request.");
                return "/index.html";
            }
            req.headers["X-Custom-Header"] = "yes";
        }
    }
}

module.exports = PROXY_CONFIG;

Scroll to top button with no jQuery

Dynamic scroll to top buttons have become quite common on a lot of webpages now, but most guides online require the use of jQuery to achieve the smooth scrolling plus fade in/out functionality. In modern browsers however, you can get much the same effect without the additional ~30kb+ library overhead - especially useful if you are already using a separate framework and don't want to pull in jQuery just for this.

Create the button

The first step is to create an element representing the actual button. This takes the form of a very simple div element (to which dynamic styles will be attached) and a nested img pointing to whatever arrow image you need. The button element can be as complex as you need, as long as you wrap it in a single div like below. The scroll to top button for this site is a simple 45x45px arrow image which works well.

<div id="topcontrol" title="Scroll to Top">
    <img src="/images/arrow.png">
</div>

Add styling

Without any styling, the image above will just appear at the bottom of your page. We need to add some CSS to ensure that the button always appears in the same position on the bottom right hand corner of the screen regardless of the current scroll position:

#topcontrol {
  @media (max-width: 38rem) {
      display: none;
  }
  
  position: fixed;
  bottom: 10px;
  right: 20px;
  opacity: 0;
  cursor: pointer;
}

The above SCSS (which can be translated to standard CSS as well) positions the element in a fixed position in the bottom right corner of the screen, sets the opacity to zero (to hide it by default) and ensures that your cursor becomes a pointer when hovering over the button, as you would expect.

JavaScript Handler

Finally, to get the desired behaviour when the button is clicked, a small JavaScript segment is needed. The below snippet uses the scrollTo function on window to scroll the page to the top whenever the button is clicked. The new smooth behaviour controls the animated effect.

Because the button is hidden by default due to opacity: 0 above, we also need to add an event handler to be called whenever the page is scrolled. If the current position is above a default threshold (100 in this case), the scroll to top button becomes visible and vice versa.

<script>
    (function(document) {
        const topbutton = document.getElementById("topcontrol");
        topbutton.onclick = function(e) {
            window.scrollTo({top: 0, behavior: "smooth"});
        }

        window.onscroll = function() {
            if (document.body.scrollTop > 100 || document.documentElement.scrollTop > 100) {
                topbutton.style.opacity = "1";
            } else {
                topbutton.style.opacity = "0";
            }
        };
    })(document);
</script>

Fade in/out

The above code will get all the behaviour we need, but the button will jump in and out of the page depending on the page position. To make it a little less jarring, some fade in/out can be added in. This is very similar to the el.fadeIn() methods you can find in jQuery. Because we are controlling the visibility solely based on opacity, we can make use of CSS transitions to animate the change across a number of milliseconds. Adding the below to the CSS selector above is a simple way to replicate the effect:

-webkit-transition: opacity 400ms ease-in-out;
-moz-transition: opacity 400ms ease-in-out;
-o-transition: opacity 400ms ease-in-out;
transition: opacity 400ms ease-in-out;