Sunday, September 29, 2013

@ECHO OFF/ON Command

@ECHO OFF/ON Command
The ECHO OFF/ON command controls command echoing: with ECHO OFF, the commands in a batch file are not displayed as they run, only their output. The leading @ hides the ECHO OFF line itself as well.
http://www.instructables.com/id/Slightly-More-Advanced-Basic-Batch/step2/ECHO-OFFON-Command/
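For instance, a minimal batch file (hypothetical) might look like this; without the first line, every command below it would be printed to the console before running:

```
@ECHO OFF
REM The @ hides this ECHO OFF line; ECHO OFF hides the commands that follow.
ECHO Hello from the batch file.
PAUSE
```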

Eclipse Memory Analyzer (MAT)

Memory Analyzer (MAT)
The Eclipse Memory Analyzer is a fast and feature-rich Java heap analyzer that helps you find memory leaks and reduce memory consumption.
Use the Memory Analyzer to analyze production heap dumps with hundreds of millions of objects, quickly calculate the retained sizes of objects, see who is preventing the garbage collector from collecting objects, and run a report to automatically extract leak suspects.
http://www.eclipse.org/mat/

build pipeline

Builds were typically done straight from the developer’s IDE and manually deployed to one of our app servers.
We had a manual process in place, where the developer would perform the following steps:
Check all project code into Subversion and tag
Build the application.
Archive the application binary to a network drive
Deploy to production
Update our deployment wiki with the date and version number of the app that was just deployed

Problems arose when we needed to either roll back to the previous version or branch from the tag to do a bugfix.


http://java.dzone.com/articles/creating-build-pipeline-using
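The manual steps above could be sketched as a script. Everything here (repository URL, app name, version, hosts) is a hypothetical placeholder, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the manual release procedure described above.
# REPO, APP, VERSION, and the host names are hypothetical placeholders.
APP="myapp"
VERSION="1.4.2"
REPO="https://svn.example.com/repos/$APP"

echo "svn copy $REPO/trunk $REPO/tags/$APP-$VERSION -m 'Tag release'"  # 1. tag in Subversion
echo "ant package"                                                     # 2. build the application
echo "cp target/$APP.war /mnt/builds/$APP-$VERSION.war"                # 3. archive the binary
echo "scp target/$APP.war appserver:/opt/deploy/"                      # 4. deploy to production
echo "wiki: deployed $APP $VERSION on $(date +%F)"                     # 5. update the deployment wiki
```

A build pipeline replaces each of these echoed steps with an automated stage.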

Saturday, September 28, 2013

Code Review Tools

  • Before introducing code reviews in your project, agree on a procedure for how to do it, and make sure that everyone agrees with it and understands it. I can suggest the following procedure points, just to get started:



        Plan for a maximum of four hours development time per task

        Keep the master branch as the production ready branch at all times

        Always branch out a new task from master or release

        Use a standard for naming your task branches (more on this below)

        Use task branches for every change (no exception!)

        When done coding, push to GitHub and open a pull request to the development branch for every task branch

        Do a code review of your own pull request before sending it

        Never merge to master until you get at least one "Ok" comment from others

        Check others' pull requests at least every two hours



    http://share.ez.no/blogs/arne-bakkebo/branching-and-code-review-procedures-qa-4
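    The branching rules above can be demonstrated in a throwaway Git repository; the branch name task/1234-fix-login is a hypothetical example of a naming standard:

```shell
# Demo of the task-branch workflow in a throwaway repository.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev

echo hello > app.txt
git add app.txt && git commit -qm "initial commit"
git branch -M master                         # master stays the production-ready branch

git checkout -qb task/1234-fix-login master  # always branch a task from master
echo fix >> app.txt
git commit -qam "fix login issue"            # the change lives only on the task branch
git log --oneline master..HEAD
```

Merging back to master would then happen only through a reviewed pull request, as the procedure above requires.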

  • The Review Assistant tool includes a custom check-in policy for TFS

  • Team Foundation Server provides a number of out-of-box check-in policies, including policies that check that static code analysis has been performed

    https://www.devart.com/review-assistant/docs/index.html?adding_code_review_policy_to_tfs_project.html

  • Set up a custom policy in TFS so that every check-in must pass a code review

  • So, instead of developers having to shelve their changes manually and assign the shelveset to a code review, it would be great if this tool could do that automatically as part of the check-in process

    http://blog.devart.com/creating-tfs-custom-check-in-policy-part-1.html

  • FishEye

FishEye knows everything about your code: it provides a read-only window into your Subversion, Perforce, CVS, Git, and Mercurial repositories, all in one place.
Keep a pulse on everything about your code:
visualize and report on activity, integrate source with JIRA issues, and search for commits, files, revisions, or people.
http://www.atlassian.com/software/fisheye/overview

  • Stash
On-premise source code management for Git that's secure, fast, and enterprise grade. Create and manage repositories, integrate with JIRA for end to end traceability, set up fine-grained permissions, collaborate on code and instantly scale with high performance.
https://www.atlassian.com/software/stash
  • Gerrit

Gerrit is a free, web-based team software code review tool.
Software developers in a team can review each other's modifications on their source code using a Web browser and approve or reject those changes.
It integrates closely with git, a distributed version control system.
https://en.wikipedia.org/wiki/Gerrit_(software)

Gerrit
Gerrit is a web-based code review system, facilitating online code reviews for projects using the Git version control system.
https://code.google.com/p/gerrit/


  • A code review process is a process in which a change by a developer is reviewed by other developers.

Every developer can suggest changes and update the suggested changes.
Once the change is accepted by all relevant parties, it is applied to the code base.
While a code review process can be implemented without any tool support, it is typically more efficient if a structured code review system is used.
Gerrit is a code review system developed for the Git version control system.

Advantages of code review

In general a structured code review process has the following advantages:

    Early error detection: errors are identified early in the process.

    Conformity with coding standards: code review lets the team identify violations of its coding standards early in the process.

    Knowledge exchange: the code review process allows new developers to see the code of other developers and to get early feedback on their suggested changes.

    Shared code ownership: by reviewing the code of other developers, the whole team gains solid knowledge of the complete code base.

    Code reviews in open tools give people without permission to push to a repository a simple way to contribute their suggested changes and to get feedback.

http://www.vogella.com/articles/Gerrit/article.html
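Gerrit's review flow hinges on pushing to the magic ref refs/for/&lt;branch&gt; instead of the branch itself. A sketch follows, using a local bare repository as a stand-in for a Gerrit server (all names are placeholders):

```shell
# On a real Gerrit server, pushing to refs/for/master opens a review
# rather than updating master directly; here a plain bare repo stands in.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare gerrit.git                # stand-in for the Gerrit server
git init -q work && cd work
git config user.email dev@example.com && git config user.name Dev

echo change > file.txt
git add file.txt && git commit -qm "My change, up for review"
git remote add origin ../gerrit.git
git push -q origin HEAD:refs/for/master      # submit the commit for review
```

Only after reviewers approve does Gerrit merge the change into the real master branch.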


  • Before you can commit the files to your repository, you need to add them. Simply right click the shared project’s node and navigate to Team => Add


After this operation, the question mark should change to a plus symbol. To set certain folders or files to be ignored by Git, e.g. the bin folder, right click them and select Navigate => Ignore.


The ignored items will be stored in a file called .gitignore, which you should add to the repository.
After changing files in your project, a ">" sign will appear next to the icon, indicating that the status of these files is dirty.
Any parent folder of such a file will be marked as dirty as well.


http://eclipsesource.com/blogs/tutorials/egit-tutorial/
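The same ignore mechanics can be shown on the command line (throwaway repository; EGit's Ignore action simply writes an entry like bin/ into .gitignore):

```shell
# Ignoring a build-output folder via .gitignore, as described above.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q proj && cd proj
git config user.email dev@example.com && git config user.name Dev

mkdir bin && echo compiled > bin/App.class
echo "bin/" > .gitignore            # ignore the bin folder
git add .gitignore                  # the .gitignore file itself is versioned
git commit -qm "ignore build output"

git status --porcelain              # prints nothing: bin/ is ignored
git check-ignore bin/App.class      # confirms the path matches an ignore rule
```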


  • Git blame

The git blame command is a versatile troubleshooting utility that has extensive usage options. The high-level function of git blame is the display of author metadata attached to specific committed lines in a file. This is used to examine specific points of a file's history and get context as to who the last author was that modified the line. This is used to explore the history of specific code and answer questions about what, how, and why the code was added to a repository.

Git Blame vs Git Log
While git blame displays the last author that modified a line, often you will want to know when a line was originally added. This can be cumbersome to achieve using git blame alone; it requires a combination of the -w, -C, and -M options. It can be far more convenient to use the git log command.

The git blame command is used to examine the contents of a file line by line and see when each line was last modified and who the author of the modifications was.
Online Git hosting solutions like Bitbucket offer blame views, which offer a superior user experience to command line git blame usage. git blame and git log can be used in combination to help discover the history of a file's contents.
https://www.atlassian.com/git/tutorials/inspecting-a-repository/git-blame
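A minimal demonstration in a throwaway repository: two authors each touch a line, and git blame attributes every line to its last author (names and file are hypothetical):

```shell
# git blame demo: each line is attributed to the last author who changed it.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo

git config user.email alice@example.com && git config user.name Alice
printf 'line one\nline two\n' > notes.txt
git add notes.txt && git commit -qm "add notes"

git config user.email bob@example.com && git config user.name Bob
printf 'line one\nline two, changed\n' > notes.txt
git commit -qam "change line two"

git blame notes.txt   # line 1 is attributed to Alice, line 2 to Bob
```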

Build Tools

  • Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
 https://maven.apache.org/
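A minimal POM illustrating that "central piece of information"; the coordinates and dependency below are placeholder examples:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- placeholder coordinates -->
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <!-- one declared dependency; Maven resolves it from a repository -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

With this file in place, `mvn package` builds the jar, all driven by the POM rather than a hand-written build script.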


  • Ant

Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
http://ant.apache.org/
  • Gradle
Gradle is build automation evolved. Gradle can automate the building, testing, publishing, deployment and more of software packages or other types of projects such as generated static websites, generated documentation or indeed anything else.
http://www.gradle.org/

  • gulp is a toolkit for automating painful or time-consuming tasks in your development workflow, so you can stop messing around and build something.

https://gulpjs.com/


  • SBT

The interactive build tool
Define your tasks in Scala. Run them in parallel from sbt's interactive shell. 
https://www.scala-sbt.org/index.html
  • Grunt
The Grunt ecosystem is huge and it's growing every day. With literally hundreds of plugins to choose from, you can use Grunt to automate just about anything with a minimum of effort. If someone hasn't already built what you need, authoring and publishing your own Grunt plugin to npm is a breeze.
Why use a task runner?
In one word: automation. The less work you have to do when performing repetitive tasks like minification, compilation, unit testing, linting, etc, the easier your job becomes. After you've configured it through a Gruntfile, a task runner can do most of that mundane work for you—and your team—with basically zero effort.
http://gruntjs.com/
  • Apache Ivy 

The agile dependency manager
Apache Ivy is a popular dependency manager focusing on flexibility and simplicity
https://ant.apache.org/ivy/

  • Java jar dependency management using Ivy


Along with our source code, we've also been putting all required jars into CVS, basically for two reasons:

    The build process controlled by CruiseControl needs the jars, but CruiseControl runs on a server where we don't want to install an IDE (JDeveloper). We've put all ADF, BC4J and other required jars into CVS, and when the build process compiles the application, it first gets all the required source code and the required jars, so there is no need for JDeveloper.
    We want version control on dependencies. If you check out the code for release 1, you need the jars that release 1 depends on. If you check out release 2, you will need different jars.

So just relying on the project dependencies in JDeveloper is not an option.

The downside to this approach, however, is that we get a lot of jars in CVS, and many projects use the same jar files, so it's a bit of a waste. For some time, I thought the solution would be to move from Ant to Maven for building the code. However, some weeks ago, I discovered Ivy. Ivy is a Java dependency manager which you can easily use in existing projects that build with Ant.

http://www.andrejkoelewijn.com/blog/2005/07/14/java-jar-dependency-management-using-ivy/
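With Ivy, the jars no longer need to live in version control; instead an ivy.xml per project declares what it needs. A sketch (the organisation, module, and dependency below are placeholder examples):

```xml
<ivy-module version="2.0">
  <info organisation="com.example" module="myapp"/>
  <dependencies>
    <!-- Ivy fetches this jar from a repository instead of CVS -->
    <dependency org="commons-lang" name="commons-lang" rev="2.6"/>
  </dependencies>
</ivy-module>
```

An `<ivy:retrieve/>` task in the existing Ant build then downloads the declared jars before compilation, and each release tag pins its own dependency revisions.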
  • Phing

Phing Is Not GNU make.
It's a PHP project build system or build tool based on Apache Ant.
You can do anything with it that you could do with a traditional build system like GNU make, and its use of simple XML build files and extensible PHP "task" classes makes it an easy-to-use and highly flexible build framework.
http://www.phing.info/


  • Gant

Build software with Gant - IBM
Gant is a highly versatile build framework that leverages both Groovy and Apache Ant to let you implement programmatic logic while using all of…
www.ibm.com/developerworks/java/tutorials/j-gant/


  • Luntbuild
Luntbuild is a powerful build automation and management tool.
Continuous Integration or nightly builds can easily be set up using a clean web interface
luntbuild.javaforge.com


  • Buildbot Basics
Buildbot is an open-source framework for automating software build, test, and release processes.

https://buildbot.net/

  • CMake is a cross-platform open-source meta-build system which can build, test and package software. It can be used to support multiple native build environments including make, Apple's Xcode and Microsoft Visual Studio.

https://github.com/ttroy50/cmake-examples


  • CMake is an open-source, cross-platform family of tools designed to build, test and package software. CMake is used to control the software compilation process using simple platform and compiler independent configuration files, and generate native makefiles and workspaces that can be used in the compiler environment of your choice. The suite of CMake tools were created by Kitware in response to the need for a powerful, cross-platform build environment for open-source projects such as ITK and VTK

https://cmake.org/


  • In short, CMake helps you manage and build your source code effectively. If you have trouble with gcc and hand-written Makefiles, just move to CMake.

https://tuannguyen68.gitbooks.io/learning-cmake-a-beginner-s-guide/content/chap1/chap1.html
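A minimal CMakeLists.txt of the kind these tutorials start from; the project and source file names are hypothetical:

```cmake
# Minimum CMake version and project name (placeholder).
cmake_minimum_required(VERSION 2.8)
project(hello_cmake)

# One executable from one source file; CMake generates the
# platform-appropriate Makefile or IDE project for it.
add_executable(hello main.cpp)
```

Running `cmake .` in the source directory generates the native build files, and `make` (or the chosen IDE) then builds the `hello` target.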


  • First, it is almost unreadable. Second, it does not account for any header file dependencies. In fact, if any of the header files change, there’s no way for the system to detect this at all, forcing the user to type in make clean, then make to make everything all over again. Third, given the sheer number of files, make will take minutes to run, which kills productivity. Fourth, if you look closely at the path, you will observe that the person writing this Makefile used a Mac with MacPorts and therefore is the kind of person who is just wrong. Fifth, this Makefile obviously won’t work for someone on Linux or Windows, even worse, it won’t work for another Mac user who used HomeBrew or compiled OpenCV from source. Sixth, some rules are just commented out, which indicates sloppy behaviour. Writing Makefiles like this just sucks. It sucks productivity and makes me think of slow and painful death.

I’ll now show how I replaced this terrible Makefile with a clean, modular CMake build system that replicated the effects of the Makefile
the ease of setting up this CMake system will convince you to never write a Makefile again.
As these files are generated by a tool and not written by a person, you never have to read them, so Makefile readability is not a concern. Second, the Makefiles encode all the dependencies correctly, so there’s no reason to go through the hoops creating complex dependency files.

But what if I’m on Windows, or if I just am the kind of person who is always wrong and uses CodeBlocks, Xcode, or another IDE? you ask. Well, hold on till the end, I’ll show you how CMake has you covered.
CMake has a concept of 'generators', which is just a fancy name for a backend.

https://skandhurkat.com/post/intro-to-cmake/



  • I love CMake, it allows me to write cross-platform code and be confident that the build system would work across a choice of compilers, IDEs, and operating systems. 

For example, I could simply write a function that mimics a unit test and prints out either “Test passed” or “Test failed” depending on the result of the test. All I now need is a way to automatically run these tests.
Why use anything else, when C++ with CMake offers cross-platform builds, testing infrastructure, and parallel execution with memory consistency models?
https://skandhurkat.com/post/intro-to-ctest/


  • SCons is an Open Source software construction tool—that is, a next-generation build tool. Think of SCons as an improved, cross-platform substitute for the classic Make utility with integrated functionality similar to autoconf/automake and compiler caches such as ccache. In short, SCons is an easier, more reliable and faster way to build software.

https://scons.org/

  • Yarn is a package manager for your code. It allows you to use and share code with other developers from around the world. Yarn does this quickly, securely, and reliably so you don’t ever have to worry.
  • https://yarnpkg.com/en/

maven repository management

  • Nexus

Sonatype Nexus sets the standard for repository management providing development teams with the ability to proxy remote repositories and share software artifacts.
Download Nexus and gain control over open source consumption and internal collaboration.
http://www.sonatype.org/nexus/


  • Artifactory

Artifactory offers powerful enterprise features and fine-grained permission control behind a sleek and easy-to-use UI.
Proxy
Artifactory acts as a proxy between your build tool (Maven, Ant, Ivy, Gradle etc.) and the outside world.
Local Cache
It caches remote artifacts so that you don’t have to download them over and over again
Control
It blocks unwanted (and sometimes security-sensitive) external requests for internal artifacts and controls how and where artifacts are deployed, and by whom.
http://www.jfrog.com/home/v_artifactory_opensource_overview


  • Empower Hudson with Artifactory - Track and Replay Your Build Artifacts

Using one of the different flavors of version control applications, you can easily reproduce the state of any point in the past using the different methods of SCM tagging.

what happens when you want to reproduce binary products from a certain phase?
Are dependencies considered?
Does anyone really remember what version of dependency X was used in version 1.0 or in version 3.1 of your application?
What if you used version ranges or dynamic properties?
Was the application compiled using JDK 5 or 6?
All this information can be recorded during the publication of your binaries, which is usually done by a CI server of your choice.

Your CI server has all the knowledge required in order to reproduce a build:
    Information on the builds themselves
    The published items
    Version information
    Dependencies
    Build environment details

Using Hudson (and others to be supported soon) and Artifactory we've:

    Supplied Hudson with all the needed dependencies from Artifactory—helping us keep the exact dependencies that were used in each build
    Deployed all produced binaries to Artifactory—helping us keep and promote all the products of the build
    Published build information to Artifactory—helping us manage and keep track of every build, environment, product, and dependency


With the assistance of these tools and methods, you will be able to reproduce and execute a build from any point of recorded time or compare information between different builds.


http://java.dzone.com/articles/empower-hudson-artifactory
  • Archiva

Apache Archiva™ is extensible repository management software that helps take care of your own personal or enterprise-wide build artifact repository. It is the perfect companion for build tools such as Maven, Continuum, and Ant.
http://archiva.apache.org/

  • In a continuous integration environment, where builds are often triggered by checking in artifacts, there is the potential for a large number of builds to be executed. Each of these builds, at least the successful ones, results in some artifacts being published into the repository. These can start consuming a lot of space, and it is important to manage them.

Archiva provides two different options for automatically cleaning up old snapshots on a per-repository basis:

    Repository Purge by Number of Days Older

    Repository Purge by Retention Count

Both of these options can be viewed and changed by clicking Repositories under the Administration menu and then clicking Edit for the repository you are interested in.

Repository Purge by Number of Days Older

Archiva automatically deletes snapshots older than the specified number of days. Archiva always retains the most recent snapshot, no matter how old it is.

Repository Purge By Retention Count

To use this method, you must set the purge-by-days-older value to 0. Archiva then retains only the most recent snapshot instances, up to the configured retention count; older instances that exceed this count are deleted.
http://docs.oracle.com/middleware/1212/core/MAVEN/populating_archiva.htm

  • A Maven repository manager, Archiva in this case, includes the following repositories:

Internal: This repository is used to store finished artifacts that you have built in your development environment.

Snapshot: This repository is used to store work in progress artifacts that you have built in your development environment.

Mirror: This repository is used to store dependencies that have been downloaded from an external repository.

Dev, test, qa, prod: You have one repository for storing the dependencies needed for each target environment. You do this because it is possible that two environments might have the same version of an artifact (for example, 12.1.2-0-0) even though the artifact has been patched in one environment, and is therefore different.
http://docs.oracle.com/middleware/1212/core/MAVEN/intro_ref_ede.htm

  • We use Maven 2 to resolve dependencies and build the source code into packages. 

With the Archiva repository manager we are able to store the needed libraries and to keep track of the daily builds or releases.
Hudson will be the tool to start up the build process every day and notify the developers when the build fails.
By default you'll have two managed repositories: an internal one you can use as a proxy for the company, and a snapshots repository to hold your snapshot builds.
Besides those there are also remote repositories. These are other Maven repositories where Archiva will look for dependencies when they aren't in your own repositories.
You can also add remote repositories. This is useful if you want to set up a proxy repository for your company: that way your company users only have to access the internal repository and don't need to go to the internet.
You can let Hudson poll the SCM and rebuild after every commit, or let it build periodically by specifying a cron schedule



Archiva starts up with two hosted repositories configured:

    Internal

    The internal repository is for maintaining fixed-version released artifacts deployed by your organization, which includes finished versions of artifacts, versions that are no longer in development, and released versions. Note that in-development versions of the same artifacts may exist in the snapshot repository in the future.

    Snapshot

    The snapshot repository holds the work-in-progress artifacts, which are denoted with a version with the suffix SNAPSHOT, and artifacts that have not yet been released.
http://docs.oracle.com/middleware/1212/core/MAVEN/populating_archiva.htm


  • Efficiency: the repository acts as a cache for Maven Central artifacts.

    Resilience: the repository protects against remote repository failures or lack of an internet connection.

    Repeatability: storing common artifacts centrally avoids shared build failures caused by developers maintaining their own local repositories.

    Audit: if all third-party libraries used by development come from a single entry point in the build process, one can assess how often they're used (based on download log files) and what kinds of licensing conditions apply.


http://stackoverflow.com/questions/8259118/good-configuration-for-archiva


  • By default, Archiva comes with a proxy to the Maven central repository. Therefore, it's basically all set up to be used as a mirror of the Maven central repository. If we request a central-repository artifact from Archiva, it will download that artifact and return it to us. On future requests for that artifact, Archiva will return the copy it has already downloaded from the central repository.



http://www.avajava.com/tutorials/lessons/how-do-i-use-archiva-as-a-mirror-of-the-maven-central-repository.html
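To route an entire team through such a mirror, Maven's settings.xml can redirect requests for central to the internal repository; the URL below is a placeholder for your Archiva instance:

```xml
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <!-- send all requests for the central repository through Archiva -->
      <mirrorOf>central</mirrorOf>
      <url>http://archiva.example.com/repository/internal/</url>
    </mirror>
  </mirrors>
</settings>
```

With this in each developer's ~/.m2/settings.xml, every dependency download passes through the single internal entry point described above.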