Glen Mazza's Weblog


https://web-gmazza.rhcloud.com/blog/date/20170218 Saturday February 18, 2017

Using AppleScript to quickly configure your work environment

At work, I use macOS's Script Editor to create and compile AppleScript scripts that quickly configure my desktop for the programming task at hand. I place each compiled script (exported as an application) in the Desktop folder so it appears on my desktop and can be launched with a simple double-click.

Three tasks I commonly script, adjusting each as needed for the work at hand:

  • Activating a terminal window with tabs pre-opened to various directories and running various commands. A script that opens a terminal window with three tabs in the specified directories, and optionally runs commands in those directories, would look as follows (see here for more info):
    tell application "Terminal"
        activate
        -- "do script" with no argument opens a new window with an initial tab
        do script
        do script "cd /Users/gmazza/mydir1" in tab 1 of front window
        my makeTab()
        do script "cd /Users/gmazza/mydir2" in tab 2 of front window
        my makeTab()
        do script "cd /Users/gmazza/mydir3" in tab 3 of front window
    end tell
    
    -- simulate Cmd-T to open another tab in the frontmost Terminal window
    on makeTab()
        tell application "System Events" to keystroke "t" using {command down}
        delay 0.2
    end makeTab
    
  • Running IntelliJ IDEA. Simple:
    activate application "IntelliJ IDEA"
    
  • Opening Chrome with a desired number of tabs to certain webpages:
    tell application "Google Chrome"
    	open location "http://www.websiteone.com/onpage"
    	open location "http://www.websitetwo.com/anotherpage"
    	open location "http://www.websitethree.com"
    end tell
    

Script Editor has a "Run" button that lets me test the scripts as I develop them. Once done, I save the script as a standalone script (so I can edit it later if desired) and also export it as an application. Exporting allows a simple double-click to run the task directly, rather than bringing up Script Editor and running the script via the "Run" button.

https://web-gmazza.rhcloud.com/blog/date/20150627 Saturday June 27, 2015

Git & GitHub Notes

  1. For ease, fork the repo on GitHub and clone your fork rather than the main repo, then

    git remote add upstream git@github.com:githubaccountname/githubrepo.git
    to link to the main repo.
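
    A minimal end-to-end sketch of this setup (the account and repository names are hypothetical):

    git clone git@github.com:myaccount/githubrepo.git            # clone your fork ("origin")
    cd githubrepo
    git remote add upstream git@github.com:githubaccountname/githubrepo.git
    git remote -v                                                 # confirm the origin (fork) and upstream remotes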

  2. git push origin branchName -- "origin" refers to your fork (the remote you cloned from); it could instead be upstream or any other remote created via git remote.

  3. To switch between branches locally, git checkout <branch name> (use git branch to see a list); to check out a new branch based on the one you're currently in, git checkout -b <new branch name>

  4. git fetch upstream [branch or remoteBranch:localBranch] updates the remote-tracking pointers to the latest code; run git merge afterwards (or git pull to do both) to update your local code. If there are any automated merge failures, use right-click Git->Resolve Conflicts in IntelliJ to resolve them manually.
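
    A typical update of a local branch from the main repo, as a sketch (the branch name is hypothetical):

    git fetch upstream
    git checkout master
    git merge upstream/master       # or: git pull upstream master to fetch and merge in one step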

  5. To place your changes on one branch above those from another, from your branch run git fetch as above, then

    git rebase [branch name to retrieve from]
    followed by
    git push origin brname -f
    to update your GitHub fork.
  6. Recovering from an accidental pull: SO #1, SO #2. Another possibility is git rebase -i HEAD~xxx to squash the commits; I instead removed all unwanted commits from the pull during the interactive rebase, then ran git rebase again to bring in the branch actually wanted.
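
    A common recovery route, sketched here assuming the accidental pull was the most recent operation:

    git reflog                      # locate the commit the branch pointed to before the pull
    git reset --hard ORIG_HEAD      # ORIG_HEAD is set by merge/pull; or use the sha found via reflog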

  7. To pull a different branch to your local machine (git co below is an alias for checkout; see the Git aliases item further down):

    git remote add upstream git@github.com:githubaccountname/githubrepo.git
    git fetch upstream
    git co upstream/release/branchname
    git co -b [newBranchNameHere]
    
  8. To check out a tag.
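
    A common approach, as a sketch (the tag and branch names are hypothetical):

    git fetch --all --tags
    git checkout tags/v1.2.3 -b v1.2.3-work      # put the tag's contents onto a new local branch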

  9. To check out a coworker's branch:

    git remote add coworkerFork https://github.com/coworkerAcct/Repo.git
    git fetch coworkerFork hisBranch:newNameForYourLocalBranch
    
  10. In GitHub pull requests, use ``` or ```json to format code (JSON)

  11. In PR reviews, type : to bring up the emoji picker.

  12. Deleting a branch:

    git branch -D branchnames...

  13. List of repos presently watching: https://github.com/watching

  14. Squashing commits:

    git log
    to find the number of commits to squash, then
    git rebase -i HEAD~X
    (X is the number of commits). On the first page, mark all but the first commit "s" for squash; on the second page, edit the combined commit message. In the standard (nano-style) editor, Ctrl-K removes lines and Ctrl-X exits, prompting to save. Then
    git push origin brName -f
    to update the remote.
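
    The whole flow as a sketch, squashing the last three commits (the count and branch name are hypothetical):

    git log --oneline              # count the commits to be combined
    git rebase -i HEAD~3           # leave the first commit as "pick" and mark the rest "s"
    git push origin brName -f      # force-push the rewritten history to the fork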

  15. git cherry-pick to copy commits from one place to another. Applying a commit from one branch to another:

    git log                                      # copy the sha of the commit you want to apply
    git fetch upstream                           # download all available branches
    git co -b mynewbranch
    git pull upstream {branchYouWantToAddPRTo}
    git cherry-pick {sha value} -m 1             # -m 1 is only needed when cherry-picking a merge commit
    git push origin mynewbranch
  16. How to merge a PR against one fork into your own
  17. Wanting to create a new branch without committing changes first:
    git stash, git checkout -b newBranch, git stash pop
  18. Undoing changes:
    git checkout <branch_copying_from> -- <file_to_reset>
  19. branch rewinding - another method for combining commits when the work was not done on a separate branch.
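    One common way to do this, sketched with a hypothetical commit count and message -- rewind the branch pointer while keeping the changes staged, then recommit everything as one commit:
    git reset --soft HEAD~3                    # rewind the last three commits, keeping their changes staged
    git commit -m "One combined commit"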
  20. JQuery Commits and Pull Requests Guide
  21. Checking out an older git version (snapshot)
  22. Adding Git Aliases -- simplify terminal window commands
  23. Efficient way to view commits:
    git log --oneline --decorate
  24. Which release contains a commit? git fetch --all followed by git tag --contains (hash)
  25. Amending commit messages

https://web-gmazza.rhcloud.com/blog/date/20140614 Saturday June 14, 2014

Creating a Java Swing alternative to JConsole for calling MBeans

In this article I show how to create a Java Swing application to interact with MBeans in a manner specific to your needs, reducing the need to use JConsole.

[Read More]

https://web-gmazza.rhcloud.com/blog/date/20140203 Monday February 03, 2014

Creating Selenium tests for Java Web Applications

To reduce the amount of manual testing needed for the Java-based Apache Roller blog server, I added a Maven submodule that uses Selenium for automated in-browser testing. Presently only basic Roller functionality is being checked (create a user, create a blog, blog an entry, confirm the entry was saved), but I expect it to be filled out more over time. Its structure may be useful for other Java projects wishing to incorporate Selenium testing. The submodule POM relies on the Jetty Maven plugin to activate Roller, Brian Matthews' inmemdb-maven-plugin to activate an in-memory (i.e., no files created) Derby database instance, and finally the Maven Failsafe plugin to activate the Selenium tests, which is necessary as the tests run under Maven's integration-test phase.

To see Selenium in action, testing Roller (requires Firefox and Maven 3.0.5):

  1. Check out the Roller source using SVN:

    svn co http://svn.apache.org/repos/asf/roller/trunk roller_trunk
    
  2. Run mvn clean install from the roller_trunk (base) folder to build Roller and have it installed in your local Maven repository (from where it will be read by the Selenium tests). Building itself is quick (about two minutes on an average machine); however, the initial download of Roller's dependencies, if not already in your Maven repo, could take some additional time.

  3. Navigate to the roller_trunk/it-selenium folder and run mvn clean install or mvn integration-test. Selenium will activate Firefox at Roller's home URL (http://localhost:8080/roller) and run its tests.

Some notes on creating Selenium-driven tests for web applications:

Using Selenium IDE to generate browser actions. Reviewing the nicely succinct documentation for Selenium IDE and Selenium WebDriver is a great way to get started. Selenium IDE is a Firefox plugin that records manual interaction with the application under testing ("AUT", using the documentation's terminology) into a script, which can then be activated from Selenium IDE to automatically re-run the same actions against the AUT. After adding testing assertions and verifications and confirming the script is moving through the AUT successfully, Selenium IDE can then be used to export the script as Java to incorporate into your WebDriver-backed Maven submodule. After becoming acquainted with the WebDriver Java API by working with a few exported files, you'll most probably find yourself able to code additional tests in Java directly without need for Selenium IDE.

Making adjustments to Selenium IDE scripts. Due to the manner in which Selenium IDE populates HTML form fields (perhaps by direct manipulation of the underlying HTML DOM document), certain mouse, key, and focus DOM events are not activated as they would be with manual data entry, resulting in necessary JavaScript not getting activated. For example, a submit button which would become enabled via JavaScript once all data entry fields are filled manually may remain disabled when Selenium IDE populates the form, leaving Selenium IDE unable to click that button and proceed. This occurred on one of Roller's registration screens -- the solution was to look in the JSP or generated HTML source for the DOM event needed to trigger the necessary JavaScript:

<tr>
...
    <td class="field"><s:password name="bean.passwordConfirm" size="20" maxlength="20" onkeyup="onChange()" /></td>
...
</tr>

<script type="text/javascript">
function onChange() {
    var disabled = true;
    var openIdConfig    = '<s:property value="openIdConfiguration" />';
    var ssoEnabled      = <s:property value="fromSso" />;
...

...and then add a fireEvent command within Selenium IDE prior to the command for clicking the Submit button:

Command:  fireEvent
Target:   id=register_bean_passwordConfirm
Value:    keyup

Note the Value above is keyup and not onkeyup; also, the target ID can be determined by having the browser display the HTML source for the page.

Exporting test cases (or a suite of test cases) into Java. Note that the Java exported cannot be reimported back into Selenium IDE for subsequent modification, although you can always save another copy of the test cases as HTML, load it into Selenium IDE for tweaking, and then do another export into Java. Also, exporting into Java is not strictly required (the Selenium Maven plugin used by Apache JSPWiki as shown here can work with Selenium IDE's default HTML), although I would not recommend HTML as you'll lose significant object-oriented coding advantages including code reuse.

The Selenium IDE File-->Export Test Case menu item provides three JUnit 4-based options:

  • RC - Uses the older Selenium 1 RC API.
  • WebDriver Backed - Uses Selenium 2's WebDriver to implement the Selenium 1 RC API. Good for transitioning from Selenium 1 to 2.
  • "pure" WebDriver - Uses Selenium 2's WebDriver API. I exported using this option, as presumably all new work should be based on Selenium 2.

In looking at the exported Java class(es), you may see commented "errors" about fireEvents (and possibly other commands) being unsupported, for example:

// ERROR: Caught exception [ERROR: Unsupported command [fireEvent | id=xxxx | keyup]]

This is usually not cause for alarm--the Selenium team decided not to support fireEvents in Selenium 2, feeling that WebDriver should instead internally fire the events that would occur if the data was entered manually. Alternatively, in certain cases testers can add actions that will cause those events to naturally activate. In my particular instance with the Roller submit button, it turned out no replacement coding was necessary as WebDriver, unlike Selenium IDE, was able to automatically fire the needed events based on the fields it filled. Note, worst case, it remains possible to execute JavaScript to fire the DOM events manually if the Java tests will not work otherwise, but before doing so, best to Google and/or search the Selenium Users Group with the specific "Unsupported command" message to see if a more standard solution is available.
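
If manual firing does prove necessary, one option is WebDriver's JavascriptExecutor. The sketch below (reusing the passwordConfirm field ID from the Roller example above, and assuming a WebDriver instance named driver plus the usual org.openqa.selenium imports) dispatches a keyup event directly to the element:

WebElement field = driver.findElement(By.id("register_bean_passwordConfirm"));
((JavascriptExecutor) driver).executeScript(
    "var evt = document.createEvent('HTMLEvents');"    // 2014-era DOM API for creating an event
  + "evt.initEvent('keyup', true, true);"              // event type, bubbles, cancelable
  + "arguments[0].dispatchEvent(evt);", field);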

Examine better ways to design tests. When working with the Java test classes, ways to improve their design using standard object-oriented techniques will become apparent. Foremost is moving to the Page Object design pattern (links). Thomas Sundberg's article shows the natural process of getting to that pattern by way of factoring out common functionality from the tests and additionally suggests using Cucumber for behavior-driven development. Some other suggestions:

  • Create an abstract base class for your page objects to handle common functionality--populating fields, validating screen titles, taking screen snapshots or logging the page source for errors, etc.

  • Although using the WebElement.click() method on form submits will normally halt processing until the next screen appears (and so far has always worked for me), the FluentWait object can also be used to explicitly halt Selenium until a specified HTML element on the new page appears (or a timeout you specify occurs); a sketch appears at the end of this list.

  • For your page objects, create an additional multi-parameter constructor for convenient creation of page objects in cases where the page is only being used to get to another page that is under test. Because a page activated with this constructor is not itself under test, providing just the minimum parameters needed to navigate onward to the page being tested should be sufficient.

  • For time, accuracy, and efficiency, I would advise against turning your page objects into POJO's, with instance variables for each screen field and getters and setters for all fields. So far, I've added getters and setters for a field only when such a method is needed by a test case. Further, I'm not creating member variables in the page object for each widget, both to simplify the objects and out of fear that their values might deviate from what's actually on the browser screen. Instead, each accessor directly reads from or writes to the browser screen.

  • If you do wish to go the POJO route, take a look at the PageFactory object and @FindBy annotation, both described well on the ActivelyLazy blog.

  • In the Page Object model, when a submit button always moves the application from Screen A to Screen B, a typical Page Object method that your test classes will call will be as follows:

    public class LoginPage {
    ...
        public UserDashboardPage loginToApp() {
            // clickById() provided by AbstractRollerPage superclass
            clickById("login");  
            return new UserDashboardPage(driver);
        }
    ...
    }
    

    What do you do, however, if the subsequent screen could vary depending on the state of the application--for example, a login page might take you to a password-has-expired screen, a message notification screen if messages are present, or the usual application screen if neither of the other cases holds? According to this article, it's recommended to have the page object implement a different method for each possible outcome, and have the test case call the method appropriate to the application state it has created:

    public class LoginPage {
        ...other method above...
    
        public ChangePasswordPage loginToAppPasswordExpired() {
            clickById("login");
            return new ChangePasswordPage(driver);
        }
    
        public UserNotificationPage loginToAppUrgentNotification() {
            clickById("login");
            return new UserNotificationPage(driver);
        }
    ...
    }
    
  • Typically the constructor of a Page Object includes a sanity-check verification that the page title is as expected (i.e., that the WebDriver is actually on the page it is presumed to be on), throwing an exception if it is not. If you have multiple screens with the same title, an alternative check based on an HTML element ID can be done instead. This requires that each page sharing the title have a unique HTML ID attribute on an element present only on that page, so you may need to have the page markup modified to include such an attribute.
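
    A sketch of a page-object base class combining this title check with an explicit wait (the class and method names here are hypothetical, not Roller's actual superclass; WebDriverWait is Selenium's FluentWait-based helper from org.openqa.selenium.support.ui):

    public abstract class AbstractPage {
        protected final WebDriver driver;

        protected AbstractPage(WebDriver driver, String expectedTitle) {
            this.driver = driver;
            // sanity check: fail fast if the driver isn't on the page this object represents
            if (!expectedTitle.equals(driver.getTitle())) {
                throw new IllegalStateException("Expected page '" + expectedTitle
                        + "' but was on '" + driver.getTitle() + "'");
            }
        }

        // helper used by page objects such as LoginPage above
        protected void clickById(String id) {
            driver.findElement(By.id(id)).click();
        }

        // explicit wait for an element expected on the next page (e.g., after a form submit)
        protected WebElement waitForElementById(String id, int timeoutSeconds) {
            return new WebDriverWait(driver, timeoutSeconds)
                    .until(ExpectedConditions.presenceOfElementLocated(By.id(id)));
        }
    }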

Other Notes:

  1. To run multiple iterations of the same Selenium tests under different circumstances (e.g., using different security authentication methods), Juan Pablo of the Apache JSPWiki Team developed a WAR overlay method along with parameter filtering to configure each of the tests - check the JSPWiki IT Tests module to see the process.

  2. Functional Automated Testing Best Practices with Selenium WebDriver - presentation by Ben Burton

https://web-gmazza.rhcloud.com/blog/date/20130610 Monday June 10, 2013

Testing JPA entity classes with JUnit

Updated August 2013.

I created a Mavenized jpa_and_junit sample to show how one can test JPA entity classes against multiple JPA implementations--EclipseLink JPA, Hibernate, and Apache OpenJPA. The tests use Apache Derby as the database, either its file-based Network server mode or in-memory implementation. The source code can be obtained from GitHub by using either the download ZIP button or git clone -v git://github.com/gmazza/blog-samples.git command.

The pom.xml for this sample defaults to EclipseLink but can be changed by adding -P[Hibernate|OpenJPA] to all of the mvn ... commands below. As a sanity-saver, if you're always going to be using another JPA stack, it's best to instead modify the pom.xml by moving the <activeByDefault>true</activeByDefault> element from the EclipseLink profile to your desired one.

The integration tests can be run via mvn clean integration-test, or the full project can be built (which also runs them) via mvn clean install. The integration tests activate a temporary in-memory Derby database using Brian Matthews' inmemdb-maven-plugin, so no manual Derby database setup is required; however, the database will vanish after the tests are run.

Alternatively, while prototyping JPA entities you may wish to see the resulting data in the database tables while those entities are being used for CRUD actions, i.e., not create and delete a new database during each run. In that case, first activate Derby's Network server mode and use ij to load the sample's database tables. Next, update the url field in the SampleRun.java file to point to your database and then run mvn clean install exec:exec. While it runs, the database tables can be browsed periodically to see how the data was inserted. Derby logging for the Network server can also be activated to see the SQL statements that were run and any SQL error stacktraces.

Besides Derby logging, each JPA stack offers its own logging capabilities for obtaining JPA-related messages (OpenJPA) (EclipseLink) (Hibernate); enabling them involves adding stack-specific property elements to META-INF/persistence.xml and/or creating a Log4j configuration file.
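
For example, SQL-level logging can typically be switched on through stack-specific property elements in META-INF/persistence.xml. A sketch follows (only the property for the active JPA stack is needed, and the exact property names and levels should be confirmed against that stack's documentation):

<properties>
    <!-- EclipseLink -->
    <property name="eclipselink.logging.level" value="FINE"/>
    <!-- Hibernate -->
    <property name="hibernate.show_sql" value="true"/>
    <!-- OpenJPA -->
    <property name="openjpa.Log" value="SQL=TRACE"/>
</properties>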

Further, the test cases (including any JPA stack code called) can be debugged in Eclipse. First use the Maven Eclipse Plugin to create an Eclipse project of the sample:

gmazza@gmazza-work:/mywork/jpa_and_junit$ mvn eclipse:clean eclipse:eclipse

...and then import the project into Eclipse. The .classpath file created by this plugin (viewable from the Eclipse Navigator view) will show that the project now links to the source files of the JPA stack you've configured. Debugging can then proceed in either of two ways:

  • If debugging the SampleRun class (i.e., using mvn exec:exec), start the Derby database and place a breakpoint within Eclipse within that source file. Right-click the file in the Eclipse Navigator or Project Explorer and choose Debug as->Java Application. The running source can now be debugged/traced as normal.

  • If debugging one of the integrated tests against the in-memory database, options include:

    • Add whatever breakpoints desired to the integration test classes in Eclipse. Then run mvn integration-test [-Dtest=TestClass[#TestMethod]] -Dmaven.failsafe.debug (use -Dmaven.surefire.debug for regular unit tests via mvn test). Next, create a new Remote Java Application debug configuration, have it connect at the default port of 5005 expected by the Maven failsafe plugin, and then click "Debug". Program control will move to the integration test(s) where the breakpoints were added. This process can be repeated by re-running the Maven command in the previous step, then right-clicking any file in the Navigator view, selecting Debug->Debug Configurations, highlighting the Remote Java Application just created, and clicking "Debug" again.

    • More simply, run mvn integration-test -Dmaven.failsafe.debug from a command-line window to initialize the in-memory database. Then place breakpoints in Eclipse in an integrated test and right-click that test from the Eclipse Navigator and choose Debug As->JUnit Test. Debug as many integrated tests as desired in this manner; once finished, press Control-C in the command-line window to stop the in-memory database.

Debugging with IntelliJ IDEA works much the same. Upon running the integration tests with -Dmaven.failsafe.debug, add your breakpoint(s) to the Java code and select menu item Run | Debug... -> Edit Configurations. From the Debug dialog that pops up, press the "+" button at the top-left and choose Remote from the treeview. Give this configuration a name, ensure it's listening on port 5005, and then press the Debug button.

Additional Resources:

Subversion Notes

Subversion:

  1. SVN cheat sheet
  2. svn propset svn:ignore - how to use a file to populate the svn:ignore list (use --recursive to set it for subdirectories as well; also covers how to revert if errors are made):

    For example:

    svn propset svn:ignore -F svnignore.txt . --recursive
    

    for an svnignore.txt file of:

    .project
    target
    .idea
    *.iml
    *.log
    

    Note: the above change will reset the svn:ignore property on *all* directories to just the contents of svnignore.txt; any other ignore entries previously set in those directories will be erased.

  3. svn propedit svn:ignore . to edit the svn:ignore list (convenient for single directory changes, no recursive option)
  4. Setting up global ignores - machine-level for all SVN projects.
  5. For remote repositories that have relocated, use the svn relocate command to sync local checkouts to the new repository URL.
  6. How to revert a file to an older version (a sketch follows this list).
  7. To check out a specific tag, svn co https://{subversion URL}/svn/root/tags/x.y.z newfolder
  8. For creating a diff (patch) file between an earlier tag and trunk: svn diff https://xxxx/tags/yyyy https://xxxx/trunk
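
    A sketch of item 6 above -- reverting a single file to an older revision via a reverse merge (the path and revision number are hypothetical):

    svn merge -r HEAD:1234 path/to/File.java     # undo in the working copy all changes made after r1234
    svn commit -m "Revert File.java to r1234"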

https://web-gmazza.rhcloud.com/blog/date/20130430 Tuesday April 30, 2013

DC ACM meetup on computer vision

Last night the Washington DC chapter of the ACM held a meetup at the New America Foundation featuring Dr. Larry Davis of the University of Maryland. He gave a broad overview of the history and applications of computer vision over the decades, successes and challenges, and current techniques and goals in the field. My notes from the meeting:

  • Early research in computer vision started in the 1960's with the U.S. Post Office, with the goal of having computers read envelopes to speed letter sorting.
    • Started with machine generated labels (3rd class mail) but today can read handwriting
    • Three step process still common today with other fields of computer vision: segmentation (learning to parse words into letters), representation (defining the shape of each letter in a way the computer can compare against), and recognition (having the computer match what it scanned against its stored representations of the various letters in order to identify the character)
  • Automotive safety another field, with stereo vision (paper) used to detect pedestrians on the road, Mercedes-Benz Stereo Multi-Purpose Camera (SMPC)
  • Agricultural uses today: harvesting, food safety (checking for rotten vegetables, etc.), IBM Veggie Vision
  • Medicine: radiological screening, assistance during surgery
  • Surveillance: border and port security, biometrics
  • Successful startup PittPatt - facial recognition technology, acquired by Google
  • Defense Applications of Computer Vision
  • Research continues on how to get machines to represent and then recognize natural objects -- most work today uses machine learning rather than storing collections of 3D models to compare against.
  • Caltech256 - stock images of 256 different items for use in testing computer vision recognition (Wikipedia)
  • Automatic Target Recognition (ATR)
  • Crowdsourcing techniques used to collect large databases of images: ESP Game, LabelMe, Mechanical Turk.
  • Sliding window object detection technique common (how modern day cameras identify faces in a picture), can add additional info about the object (e.g., for a car, that it is commonly on roads, next to other cars) to increase image retrieval.
  • Precision/Recall curve - the more objects (e.g., cars) one tries to identify in a photo, the more false positives are retrieved
  • Google Glass

https://web-gmazza.rhcloud.com/blog/date/20121224 Monday December 24, 2012

Working with Apache Derby

Summary: Provides an easy introduction to Apache Derby: setup, creating and populating tables, using the ij utility to run SQL statements, authentication, and running Derby in Network Server mode.

[Read More]

https://web-gmazza.rhcloud.com/blog/date/20121008 Monday October 08, 2012

My Ubuntu post-installation tasks

Updated December 2015.

The below lists my post-installation tasks for Ubuntu Linux. Once or twice a year I find myself needing to re-install Ubuntu, so maintaining a checklist that I update each time helps me go through the process more quickly. I've found that by keeping a new hard drive separated into three or four partitions (operating system, swap space, and applications and data either together or split), I can minimize the effect of an OS reinstallation: just the OS partition needs to be wiped clean and reinstalled, and the other partitions remain usable as before. (Separating the application and data partitions has the additional benefit that one may only need to back up the data partition.) Some files in the operating system partition, however, need to be re-configured after a re-install so the application partition can be used again, and I've made notes below where I've found that occurring.

I've found it too clumsy and hard to maintain a dual-boot (Windows and Ubuntu) hard drive, so I keep each on a separate SSD hard drive and swap out Ubuntu for the relatively infrequent times I need Windows. In turn, either SSD drive can be swapped between my faster desktop and slower laptop, so I can work from the former at home and switch to the latter when travelling.

The files below I keep in a separate restoreFiles folder on the data partition, to be re-inserted into the OS partition after an OS reinstall:

  • My .ssh key for GitHub
  • My PGP key for Apache
  • An export of my Thunderbird email filters, created by the Message Filter import/export plugin
  • .sh utility scripts I use to run applications
  • Konversation IRC configuration files (konversationrc and konversation.notifyrc) located in ~/.kde/share/config.
  • Personal HTML home page containing frequently used links
  • Maven settings.xml file
  • .gitignore_global file and .subversion folder, located in the home folder
  • From Tomcat, the JDBC drivers, the JavaMail JAR, and the TightBlog and JSPWiki properties files from the lib folder, plus the Tomcat server.xml file.

Post-Ubuntu install Configuration Steps:

  1. Update Ubuntu with the latest from the 'Net: sudo apt-get update && sudo apt-get upgrade or via Software Updater.
  2. Install Skype, make sure it works using the Skype Echo/Sound Test Service
  3. Install HP LaserJet drivers
  4. Using Ubuntu Disks application, check mount point locations of the partitions created:
    • Not at desired mount point location (/work for me)? See here.
    • Not automatically mounted at startup? (check on subsequent boots) See here.
    • Check ownership of the non-OS partition: run ls -la from the / folder; if it is owned by root:root, change it to your login account (sudo chown -R gmazza:gmazza work).
  5. Configure Thunderbird:
    1. Add your mail account(s), and once done, pick desired default SMTP server.
    2. If you have mail filters, install the Message Filter Import/Export add-on. Import the previous message filters file and confirm the filters route to the correct mail folders.
    3. If you've saved your local mail files in a non-default location, right-click on Local folders, Settings, and change the directory to that location.
    4. If needed, create the signature lines for your email accounts.
  6. Files from the list at the top that need copying to home folder:
    • .ssh folder
    • .gnupg folder
    • .bashrc file
    • .subversion folder
    • .gitignore_global file
    • Any custom .sh files used to start applications
  7. Applications that need re-installing:
    • sudo apt-get install git
    • sudo apt-get install subversion
    • sudo apt-get install dolphin
    • sudo apt-get install konversation
    • Reconfigure Konversation IRC client: Start konversation to create the ~/.kde/share/config folder automatically, then move konversationrc and konversation.notifyrc files to that folder
    • Install Google Chrome
  8. Set Firefox, Google Chrome to default home page.
  9. Install GitHub-Dark theme from userstyles.org.
  10. gedit changes:
    1. On the Edit->Preferences page, disable gedit's backup copy option.
    2. Set the tab width to 4 spaces, inserting spaces instead of tabs.
    3. Configure gedit to work well with Dolphin
  11. Create a Terminal profile (File->New Profile) named "HasTitle", based on the default profile. On Title and Command page, "When terminal commands set their own titles" item, set to "Keep initial title". This allows scripts which open multiple terminal tabs to have their tab titles kept.
  12. Switch Dolphin to details view (Menu item View -> Adjust View Properties), and have it always use gedit to open text files (in Dolphin, right-click a text file, "Open With...Other", and then select to always use Gedit.)

Development Application installation:

  1. Install Oracle JDK along with the unlimited-strength encryption policy files
  2. Java plugin for Firefox: follow here and here to symlink to the Java version you downloaded, and test via this link.
  3. Install Tomcat (and configure the tomcat-users.xml file) -- test that it starts with the tomcat.sh script. Move the TightBlog-specific configuration files over.
  4. Install Maven, replace the $MAVEN_HOME/conf/settings.xml with that from the restoreFiles directory.
  5. Install IntelliJ IDEA
  6. Install Derby
  7. Install SquirrelSQL (need to run java -jar), and in its lib folder add any desired JDBC JARs.
  8. Update the ~/.bashrc file to include paths to the above applications and other desired configuration. Then run source ~/.bashrc for the new values to take effect for the current terminal window. The below is what I add to the end of the default .bashrc file:
    export JAVA_HOME=~/work/jdk1.8.0_65
    export CATALINA_HOME=~/work/apache-tomcat-8.0.30
    export MAVEN_HOME=~/work/apache-maven-3.3.9
    export DERBY_HOME=~/work/db-derby-10.12.1.1-bin
    export SQUIRREL_HOME=~/work/squirrel-sql-3.7
    export IDEA_HOME=~/work/idea-IC-143.1184.17
    export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin:$DERBY_HOME/bin:$SQUIRREL_HOME:$IDEA_HOME/bin
    
    export JAVA_OPTS="-Xmx2048M"
    export MAVEN_OPTS=$JAVA_OPTS
    export CATALINA_OPTS=" -Xdebug -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n"
    
    # add common exclude filters to grep
    alias grep='grep --color=auto --exclude-dir={.svn,target,.idea} --exclude='*.iml''
    # for grep, switch from hard-to-read magenta to bright yellow (use fn=33 for dull yellow)
    # see: http://en.wikipedia.org/wiki/ANSI_escape_code#Colors
    export GREP_COLORS="ms=01;31:mc=01;31:sl=:cx=:fn=01;33:ln=32:bn=32:se=36"
    
    # (MacOS only) Show current git branch at command-line
    if [ -f $(brew --prefix)/etc/bash_completion ]; then
      . $(brew --prefix)/etc/bash_completion
    fi
    
    export GIT_PS1_SHOWDIRTYSTATE=1
    export PS1='\[\033[01;32m\]\u@\h\[\033[01;34m\]\w\[\033[01;33m\]$(__git_ps1)\[\033[01;34m\] \$\[\033[00m\] '

Source code download (need write access for these URLs):

  1. Apache CXF: svn co https://svn.apache.org/repos/asf/cxf/trunk cxf
  2. Tightblog: git clone git@github.com:gmazza/tightblog.git
  3. Jersey samples converted to CXF: git clone git@github.com:gmazza/jersey-samples-on-cxf.git
  4. Blog samples: git clone git@github.com:gmazza/blog-samples.git

https://web-gmazza.rhcloud.com/blog/date/20110115 Saturday January 15, 2011

DocBook Resources

In this blog entry I share some helpful resources and tips for working with Docbook.

[Read More]



This is a personal weblog; I do not speak for my employer.