Monday, December 16, 2013

Apex Test Methods - SeeAllData

One topic that I want to touch on is the SeeAllData attribute.  I thought about including it in my other post about test methods but decided to give this one its own space.  First, let's get a definition:

***
Starting with Apex code saved using Salesforce.com API version 24.0 and later, test methods don’t have access by default to pre-existing data in the organization, such as standard objects, custom objects, and custom settings data, and can only access data that they create. However, objects that are used to manage your organization or metadata objects can still be accessed in your tests such as:
User
Profile
Organization
AsyncApexJob
CronTrigger
RecordType
ApexClass
ApexTrigger
ApexComponent
ApexPage
Whenever possible, you should create test data for each test. You can disable this restriction by annotating your test class or test method with the IsTest(SeeAllData=true) annotation.
Test code saved using Salesforce.com API version 23.0 or earlier continues to have access to all data in the organization and its data access is unchanged.

***

I've seen and heard many people claim that setting this attribute to true is a bad practice.  Go ahead and google it.  You'll find plenty of well-intentioned people telling you that you should never (or very rarely) set it to true, warning you as if they were saving you from some kind of doom.

I say that there are plenty of reasons to use it, and removing it from your tool set puts you at a disadvantage.  Let's look at a couple of examples where I see some utility:

  • Existing org data
    • Volume
    • Quality
  • Custom settings
Setting SeeAllData to false (or accepting the current default) cuts your test methods off from existing org data. So, for example, if you were to query for an existing account and set a field value in your test method, you'd end up with a query exception.  If you were the developer of a managed package, you'd recognize this scenario: your account doesn't exist in your customer's org, so you'd need to create a new account in your test methods.  In this case, SeeAllData = false makes sense because you cannot assume that your test data exists in someone else's org.
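That defensive pattern might look something like this - a minimal sketch, where the class, object, and field values are just placeholders for your own:

```
@isTest
private class OpportunityTriggerTest {
    // With API v24+, SeeAllData defaults to false, so the test must
    // create every record it needs rather than query for an existing one.
    static testMethod void testWithOwnData() {
        Account a = new Account(Name = 'Test Account');
        insert a;

        Opportunity o = new Opportunity(
            Name = 'Test Opp',
            AccountId = a.Id,
            StageName = 'Prospecting',
            CloseDate = Date.today().addDays(30));
        insert o;

        // exercise your code against the data you just created
        System.assertNotEquals(null, o.Id);
    }
}
```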

If you were the developer of your company's internal Salesforce instance, your perspective and use cases are likely to be different. Imagine you've created a vf page that aggregates some account data.  For a page that does a lot of soql queries, you may be concerned about performance or governor limits.  If your org had millions of accounts, you could make your test methods try to simulate your account data volume, but it might be smarter to also make use of the existing data in your full sandbox.  By running apex tests against both new and existing data, you'll be more confident in how your application behaves in production.

Data quality is another place where setting SeeAllData = false may leave you vulnerable.  Let's say your org's history was something like this:
  1. In January, your users started using opportunities
  2. Later in March, they created a validation rule to make a field required on opportunities
  3. In June, you were asked to come in and add a trigger to activities to automatically update the related opportunity
If you were using SeeAllData = false, your test methods would create a new account and opportunity, create a related Task, and successfully assert that the Opportunity was updated by your new trigger.  Your test opportunity would have been created with all of the required data and your trigger would pass your unit tests.  However, the trigger would fail in production, perhaps unexpectedly, because you didn't know that there were January opportunities that had not been updated after the validation rule was introduced.  If you were using SeeAllData = true, you may have caught this data inconsistency and been prepared to handle the exception cleanly.
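A SeeAllData test that would have caught this could look something like the following sketch (object and subject values are illustrative; the point is that the queried opportunities are real org records, not freshly-created ones):

```
@isTest(SeeAllData=true)
private class LegacyOpportunityTest {
    static testMethod void testTriggerAgainstExistingData() {
        // Query real records, including ones created before the
        // validation rule existed.
        List<Opportunity> opps =
            [SELECT Id FROM Opportunity ORDER BY CreatedDate LIMIT 10];

        List<Task> tasks = new List<Task>();
        for (Opportunity o : opps) {
            tasks.add(new Task(WhatId = o.Id, Subject = 'Call'));
        }
        // Inserting the tasks fires the activity trigger against legacy
        // data; a January opportunity missing the now-required field
        // will surface the validation failure right here.
        insert tasks;
    }
}
```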

I think custom settings are another place where I'd look to use SeeAllData.  In much of my apex code, I try to make the code flexible through a custom setting.  So, for example, if I am integrating via an http callout, I'd try to put the endpoint url in a custom setting.  Using SeeAllData = false, I could successfully deploy to production without ever setting up the custom setting, because my test methods create the custom setting themselves. However, using SeeAllData = true, my test methods would fail because the custom setting had not been created in production.  That failure tells me to put my dependency in place before bringing users back into the system.
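A sketch of that dependency check (Integration_Settings__c and Endpoint_URL__c are made-up names - substitute your own custom setting and field):

```
@isTest(SeeAllData=true)
private class CalloutSettingsTest {
    static testMethod void testEndpointSettingExists() {
        // This fails in any org where the custom setting hasn't been
        // populated -- which is exactly what we want to know before
        // users come back into the system.
        Integration_Settings__c s = Integration_Settings__c.getOrgDefaults();
        System.assertNotEquals(null, s, 'Custom setting is missing');
        System.assertNotEquals(null, s.Endpoint_URL__c, 'Endpoint url is not configured');
    }
}
```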

So, there you have it.  SeeAllData = true can be your friend.  It's like bacon - plenty of people will tell you to avoid it, if you want to live.  But I say a little bit in your life can be delicious.
  

Deep Thoughts on Apex Test Methods

You're good enough.
You're smart enough.
You can write a good apex test method.

I just completed a major rewrite of all test methods for a client, and while it was painful at times, I think it puts them in a position to extract some value from what was previously just a production-deployment hurdle. Trust me, I'm not yet a full test-driven-development convert, but I do believe that you can help your business automate some testing, and maybe even save some cash, if you take the time to think about your testing and apply it to your test methods.

You get better at the things you do over and over, and writing good test methods is certainly something you can expect to have plenty of opportunity to practice.  There are some great resources out there to be sure you are repeating good habits. I'd start with Dan Appleman's Advanced Apex book, as he has some great ideas for test class writing. Some other articles that I think are helpful and instructive:

http://jessealtman.com/2013/09/proper-unit-test-structure-in-apex
http://wiki.developerforce.com/page/How_to_Write_Good_Unit_Tests

With this recent rewrite effort, some of the good practices I've incorporated are:

  • Moving test methods from functional classes into separate Test Classes
    • This decouples your tests from your functional classes (at the cost of your tests no longer reaching private methods) and gives you flexibility to refactor your code without being tied down by your test methods.
  • Asserting results
    • While you may achieve the 75% code coverage goal, your tests will have little or no value if you are not actually checking expected versus actual results.  As a managed package developer, you'll also get flagged during the security review if you are not asserting your results.
  • Centralizing and standardizing helper utilities
    • Like other apex you write, you should try to encapsulate where you can and use helpers to minimize the effort in testing various permutations of data against your code.
  • Testing negative scenarios
    • This is another area where you can get some value out of unit testing.  While it may take 80% of your effort to identify and code these, it will yield lots of value in improving your code and your confidence in how it handles atypical scenarios.
  • Testing as an end user
    • Unfortunately, as developers we are system admins, and almost everything we test works as expected because we don't have to deal with sharing, role hierarchy, or object/field visibility. In the real world, our users are almost never system admins, so testing only as a system admin makes no sense.
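The last two points can be combined in one sketch: run the assertion-heavy part of the test as a non-admin user. The profile name and user details here are illustrative, based on the standard pattern from the Apex documentation:

```
@isTest
private class AccountVisibilityTest {
    static testMethod void testAsStandardUser() {
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
        User u = new User(
            Alias = 'tuser', Email = 'tuser@example.com',
            EmailEncodingKey = 'UTF-8', LastName = 'Testing',
            LanguageLocaleKey = 'en_US', LocaleSidKey = 'en_US',
            ProfileId = p.Id, TimeZoneSidKey = 'America/New_York',
            UserName = 'tuser' + System.currentTimeMillis() + '@example.com');

        System.runAs(u) {
            // sharing rules and permissions now apply as they would
            // for a real end user, not a system admin
            Account a = new Account(Name = 'Visible?');
            insert a;
            System.assertNotEquals(null, a.Id, 'Standard user could not create the account');
        }
    }
}
```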

This is certainly not the complete list of best practices, but it's a good start.

As with other things in Salesforce, there is some room for improvement with the execution of unit testing in the application. In particular, I'm still frustrated by what Jeff Douglas calls the "black art".  Like Jeff, I've come across some peculiar behaviors that can be maddening.  For example, if you are testing a trigger and need to create a user and then test the dml, there's a pretty good chance you're going to get a mixed DML exception.  What is maddening, however, is that the error will not be caught in the force.com ide.  Oh, and you can deploy to production with this too!  The only thing keeping me sane was knowing that someone else had noticed this too.



And aside from the nonsense of trying to estimate your code coverage in any of the tools out there, it would be nice if the test classes had the code coverage estimation like the functional classes, for those of us separating them.


And don't get me started on the new test execution screens.  Aside from queuing your tests, they provide no value!

I think there is plenty to like about putting some effort into doing proper unit testing in Salesforce.  I'm sure if you build in the time to your sprints/plans, it will pay dividends in the long term.  Just be sure you approach this with a good sense of humor.


Wednesday, November 20, 2013

Traffic Nightmare @ Dreamforce

This is incredible... Last year, the SF Giants were in the playoff hunt, so downtown was already jamming. I can't even imagine trying to drive anywhere with over 140k registered attendees this year.

Monday, November 18, 2013

Drink the Kool Aid, It's DreamForce Season Again!

Salesforce1 appears to be the big announcement this year.  I just watched this video that Salesforce published and while the music and voice over are great, I don't understand a thing about what is being announced.  If you can make sense of this video, please share in the comments.


Wednesday, November 13, 2013

A Bug!

It's not often that I come across a real bug with Salesforce's apex or visualforce platform.  Most often there are limitations or shortcomings that you have to work around.  Recently, I had to make an urgent change to a trigger and its associated test class.  However, when I attempted to comment out a line in my test class, I got the following error in the editor when I tried to save:

java.lang.reflect.InvocationTargetException

When I attempted to make the same change in the developer console, I got another error:

 An unexpected error has occurred. 421011484-16071 (1420197083) for deploymentId=1drJ00000002FDxIAM If this persists, please contact customer support.


Fortunately, I was able to still deploy my code without the test class change but I opened a case anyway and after waiting a few days for a reply, was told that it was a known issue.  The instructions from developer support were:
  • Please clear Test results and try to save the code:
    • From Setup, click Develop | Apex Test Execution |View Test History | Clear test results. 

However, even before I did these actions, I tried to update the test class again and, surprise, no errors. So, something fishy is going on... Support wants to close the case but I'm inquiring for additional details.  Will keep you posted.


**Update Nov 13**

Salesforce has responded and indicated that it was a bug but has been fixed.  Details here: https://success.salesforce.com/issues_view?id=a1p30000000T17j

Friday, November 8, 2013

Workflow and User Permissions

Q: Do workflow rules run as the user or do they run as the system?  For example, if you had a sales team associate update an opportunity and there was a workflow that fired on any opportunity edit, would the workflow update a field that the user did not have profile (or permission set) permission to update?

Q: If the workflow action reassigned ownership to another user, would it execute the ownership change despite the user's Transfer Records system permission being false?

Q: If the workflow action changed the record type to a value, would it change the record type if the user's profile did not have access to the specific record type value?

***

My initial reaction was that workflow would run as the logged-in user and would obey the user's profile and permission sets.  However, upon testing, what I found was that workflow runs as the system and does not honor the user's profile or permission sets.  So, for the 3 questions above:

  1. Workflows run as the system and would update a field that the user did not have profile/permission set access to update
  2. Workflows will execute ownership changes on behalf of users who do not have permission to directly change the ownership
  3. Workflows will change record types in spite of profile-specified record type access.



Wednesday, October 30, 2013

JQuery Tablesorter, Meet PageBlockTable

I am definitely late to the jquery party.  In the last year or so, I've been able to harness jquery to make pages more usable and functional, and I just finished a poc for another beautiful jquery solution for a recurring visualforce requirement: sortable tables.

A colleague and I were doing a peer review on a visualforce page he had built.  The page had a sortable pageblocktable, which he had enabled with a custom compare function in his controller.  Now, I've seen all kinds of ways of doing sorting in the controller, and have done some suboptimal server-side sorting myself, but I never really thought about trying to keep it client-side.  I figured there'd be a way with javascript but I just didn't have it in me to try to code it up.  So, I poked around the google and sure enough, there's a jquery plugin already built to do it.  And, sure enough, another Salesforce developer shared her solution using the tablesorter plugin years ago.

So, a few years late to the party :)

Anyway, I wanted to share my variation since I was able to get the tablesorter to work w/ the standard apex component pageblocktable.  Again, with jquery, the solution is basically the following:

1. Import your library as a static resource
2. Reference your resource in your vf page
3. Bind your jquery function and your component

So, applying the parts to my poc, I have a page that looks like this:

*******

<apex:page standardController="Opportunity" tabStyle="Opportunity" extensions="myext" id="thepage">
<apex:includeScript value="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"/>
<apex:includeScript value="https://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"/>
<link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/ui-lightness/jquery-ui.css" type="text/css" media="all" />
<apex:includeScript value="{!URLFOR($Resource.tablesorter, 'jquery.tablesorter.min.js')}"/>

<script type="text/javascript">
    $j = jQuery.noConflict();    
    $j(document).ready(function () {
    $j("[id$=theaddrs]").tablesorter();

    });

  //some other unrelated js

</script>

<!-- some other visualforce stuff then the heart of the proof of concept: -->

<apex:pageBlock id="theaddrsblock">

    <apex:pageBlockTable value="{!Addrs}" var="a" id="theaddrs" styleClass="tablesorter" headerClass="header">
        <apex:column>
            <apex:facet name="header">
                <apex:outputText styleClass="header" value="{!$ObjectType.Address__c.Fields.Street__c.Label}" />
            </apex:facet>
            <apex:outputText value="{!a.Street__c}" />
        </apex:column>

<!-- the other columns, closing tags, and that's it -->


******

There is nothing to share about the controller because the sorting is being done w/out a callback to Salesforce.

The key additions - the tablesorter includeScript, the three-line jquery binding, and the styleClass/headerClass attributes on the pageblocktable - show how little is needed to modify your standard pageblocktable into a sortable table.  Not much, right?

I suggest looking at the documentation or online discussions about optional parameters that can be specified in the tablesorter library but if you just have a few fields in a table that you need to sort, the tool will do a great job of figuring out the data magically.  It's really awesome.

Now, this is only a proof of concept, so there is at least one issue to resolve: the icons that indicate sort direction are displaying on top of the column labels.  It should be possible to modify the css to offset the icons; worst case, we just remove the styleClass attribute on the outputtext of the column facet.  Anyway, if you don't have to send it back to the controller for some logic, or because the dataset is too large, just use the plugin to sort!


Thursday, October 24, 2013

Custom Button to Run Your Entry Criteria before Approval Submission

A sorely lacking feature with Salesforce approvals is the entry criteria rejection message.  If your record does not meet the entry criteria for a given approval process, you get a very generic

Unable to Submit for Approval
This record does not meet the entry criteria or initial submitters of any active approval processes. Please contact your administrator for assistance. 


with no indication of what is missing!  Users hate this.  I mean, HATE this.  They did not configure the approval process and so they've got no idea why they can't submit for an approval.  If the organization is large enough, the sales ops and/or IT help desk gets involved.  This to me is total lunacy and I see it everywhere.  Imagine the productivity you could restore if you provided your users with some meaningful information.  There is an idea you can vote up, if you agree.

There are some options to work around this limitation in the product.  One that I've been playing with is the idea of moving the entry criteria rules into one or many validation rules that only run when a field is set.  For example, you could have a Validated__c flag and create "entry criteria" validation rules that only allow it to be set if your entry criteria are met. The flag could then be used by the approval process entry criteria, which moves the logic back to the object, where it can be surfaced.  Depending on your business, you could use workflows/triggers to uncheck the flag, should something change prior to approval submission.
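A sketch of one such "entry criteria" validation rule - the criteria fields here are placeholders for whatever your approval process actually requires. Because the rule only fires when someone tries to set the flag, normal edits are unaffected, and the error message can name the exact fields that are missing:

```
AND(
  Validated__c,                 /* only applies when the flag is being set */
  OR(
    ISBLANK(Ship_Date__c),      /* each line mirrors one entry criterion */
    ISBLANK(TEXT(Region__c))
  )
)
```

With an error message along the lines of "Ship Date and Region are required before this record can be submitted for approval."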

It was pretty easy to move the entry criteria into validation rules but where I struggled was getting a button to set the field for me and still display the validation rule errors on the standard layout.  I was thinking that a "Validate" button would be more intuitive than setting the flag manually, so here's what I tried:

  1. Button click -> javascript to set field and save 
    • validation rule errors only captured in js but raised through an alert box
  2. Button click -> apex class to set field and save
    • validation rule errors only captured in js, again, raised in alert box
  3. Button click -> url params to set field and auto-save(?!)
    • almost there!

With options 1 and 2 the javascript is only able to surface the errors with an alert.  If your rules are simple, this is probably viable and acceptable for your users.  However, if you have 10 fields that are required for an approval process, your users are probably not going to write down each of the fields they need, then dismiss the alert, then fix the fields and submit.

With option 3, I was looking around to see if there were any non-visualforce options when I came across this article.  The idea was simple: use a custom button to invoke the edit mode for the record and prepopulate the Validated__c flag.  Additionally, if you add save=1 to your url, Salesforce would automatically save the record!  It only took a few minutes to configure and everything worked beautifully except the auto-save.  Apparently, Salesforce has disabled the save parameter, so for now our users have to click the Validate button, then Save.  Not bad.
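The button itself is just a URL-style custom button along these lines - the 00N... field ID is hypothetical; you'd grab the real ID of your Validated__c field from its setup page:

```
/{!Opportunity.Id}/e?00NA0000001Xyzb=1&retURL=/{!Opportunity.Id}
```

In theory you'd append &save=1 for the auto-save, though as noted above that parameter no longer works.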

The bottom line is that there are options to improve the usability of the entry criteria in your approval process.  You do not have to live with the generic error and you can certainly improve the productivity of your team by moving some of the logic into validation rules.



Friday, October 18, 2013

Draggable and Resizable Modal Popup

One common visualforce solution that I've built for several clients is a modal popup.  At its core, it is nothing but a hidden outputpanel that is dynamically rendered.  With some styling, you can display the panel "above" your current page, with the current page grayed out.  There are hundreds of blog posts out there covering how to do this: here's one, another, yet another.

These blog posts are incredibly helpful and have inspired me to pay it forward and share the incremental bits that I can.  One thing that I've wanted to do, and just got working in my sandbox, was to make these modal popups draggable and/or resizable.  As I've come to learn, jquery makes this ridiculously easy.

If you look at the source code for the draggable example on jquery's site, you'll see that it's 3 parts:
  1. The references to the jquery library
  2. The jquery function
  3. The div that you want to make draggable
Applying this to Salesforce, there are the same 3 parts:

1. A reference to the jquery libraries.  You should probably use static resources, but if you're just testing it out, something like this needs to be on your page:

<apex:includeScript value="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"/>
<apex:includeScript value="https://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"/>
<link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/ui-lightness/jquery-ui.css" type="text/css" media="all" />

2. Add the jquery script and don't forget the noConflict() requirement.  In my example, I have a div called "pop" that I want to make draggable and resizable.

<script type="text/javascript">
    $j = jQuery.noConflict();
    $j(function() {
    $j( "[id$='pop']" ).draggable().resizable();
});
</script>

3. The div/outputpanel becomes something like this:

<apex:outputPanel id="popupBackground" styleClass="popupBackground" layout="block" rendered="{!displayPopUp}"/>
        <apex:outputPanel id="custPopup"  layout="block" rendered="{!displayPopUp}" >
        <div id="pop" Class="custPopup">
        <!-- your form/pageblock/fields/tables go here-->


Part 3 depends on your styling but it should be pretty easy to apply to your modal popup.  If all goes well you should be able to move your panel around and resize it.  Enjoy!

Tuesday, October 15, 2013

Salesforce formula with CONTAINS

Recently, I learned an interesting detail about the CONTAINS method that can be used in formula fields.  According to the inline formula help, CONTAINS is defined as follows:

CONTAINS(text, compare_text)
Checks if text contains specified characters, and returns TRUE if it does. Otherwise, returns FALSE

Let's say you needed to check for multiple values and, for each, set your formula to some other value. The obvious path would be to nest your CONTAINS in a CASE statement, right?  Something like this, maybe:

CASE(My_Field__c,
  CONTAINS(My_Field__c, 'Some Value'), 'New Value',
  CONTAINS(My_Field__c, 'Some Other Value'), 'New Value', etc...)

WRONG!  Turns out, you can't use CONTAINS with CASE, as confirmed here.  Ugh.  So, plan B might be to use CONTAINS with a nested IF, right?  Yes, it works, but the problem you may run into is that if you are checking for many values, you may hit the maximum size for the formula field, which at the time of writing is 3,900 characters.
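Plan B looks something like this (the values are illustrative):

```
IF(CONTAINS(My_Field__c, 'Some Value'), 'New Value',
  IF(CONTAINS(My_Field__c, 'Some Other Value'), 'Another Value',
    'Default Value'))
```

Each additional value costs you another nested IF, which is where the character limit starts to bite.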

So, when I was researching this, I came across this obscure knowledge article.  What caught my eye was the following:

Example 2:
a. CONTAINS("CA:NV:FL:NY",BillingState)
Will return TRUE if BillingState is CA,NV,V,L,FL:NY or any exact match of "CA:NV:FL:NY".
NOTE: when using contains with the multiple operator (:) contains then becomes equals.

The colon operator allows you to inspect many values without the overhead of nested IF statements. This seems well suited for checking the standard BillingState field, where values will be relatively uniform.  The key difference is the note indicating the change of function when using the operator.  In the provided example, if you had C, A, N, V, L, F, or Y, it would return true.  But if you had California, or even CALIFORNIA, it would return false.  By contrast, if you had used a nested IF, you could have introduced some additional flexibility in finding California, CA, Cali, NoCal, SoCal, etc., sacrificing some of your character limit.  So, the takeaway for me is this: if your data is pretty uniform and structured, use the : operator; otherwise, use the nested IF.

Wednesday, October 9, 2013

Multiple Addresses - A Larger Question

One interesting aspect of Salesforce is the address concept.  Since Salesforce is a software (er, no-software) company, they don't really ship anything, right?  Everything is delivered via the cloud.  This perspective seems to have influenced how they model addresses.  Out-of-the-box, addresses are merely attributes of an account and contact in Salesforce.  So, for businesses who actually ship things, like widgets, to other businesses, how does this out-of-box model work?  Say, you are a widget maker and you have big customers, with many locations that consume your widgets.  How should you capture where you're selling your product and where it is sent?

Imagine you have a customer whose organization looks something like this:

  • Joe's Plumbing Worldwide
    • Joe's Plumbing Canada
    • Joe's Plumbing America
      • Joe's Plumbing New England
        • Joe's Plumbing Boston
        • Joe's Plumbing Hartford
      • Joe's Plumbing Chicago
      • Joe's Plumbing Los Angeles
    • Joe's Plumbing Europe
      • Joe's Plumbing France
      • Joe's Plumbing England
If you were selling widgets to Joe's Plumbing, what is important for your business to capture?  Does it only matter that you are selling to "Joe's Plumbing Worldwide"?  Do you have regional teams that support Joe's Plumbing in language?  Do your teams "own" these accounts and sell into them? When you report on sales and service, do you want to measure the selling and servicing at the regional level?  Is your customer data provided or enriched by any 3rd party services?

The account concept is central to Salesforce crm, so the decisions you make around how you model your customers are significant.  Many appexchange products and services, like address verification, assume that you use the out-of-the-box address fields.

There are several options to support multiple addresses, and each of the options I've listed below can have some variation, but the important take-away is that your approach has implications to think through.

Option 1: Use the native Account hierarchy
Option 2: Create a custom object to hold Addresses
Option 3: Denormalize shipping addresses onto Opportunities/Orders
Option 4: Add additional shipping address fields onto your Account object

For example, if you go with option 1, do your shipping records mean anything?  Should your teams own these records?  Do you need to restrict the ability to create opportunities to just the parent account? Do you want to restrict the ability to create contacts or tasks to just the parent account?  If not, is your reporting ready for a hierarchy of data to roll up?

If you like option 2, do you need to report on shipping information?  Does your address data need to be verified or enhanced by a 3rd party and if so, does that 3rd party support your custom address object?

As is usually the case, there are many ways to solve the problem.  It's a matter of figuring out what is the best fit for your business and thinking through the implications of that decision.


Thursday, October 3, 2013

Salesforce Quotes


There is no doubt that the Salesforce Quote functionality is useful.  But could this object be any more specialized?  We've run into so many issues with customizations.  Here's a list of some issues or limitations I've recently come across:

1. You cannot override the standard Quote view with a custom visualforce page.  The option to override View is not available.
  • I suspect that because the object is so specialized (oppty sync, PDF creation, etc.), this is not likely to change.  If you need custom quote functionality, you may need to build your own from the object up.

2. The standard Discount field is not available as a merge field in Email templates.
  • As a workaround, you can create a formula field that references the out-of-box field and then use the formula field in your email template.

3. You cannot roll up list price from the line item.
  • Apparently, list price is a reference to the PricebookEntry object and so it is not a field that can be summarized.  To proceed, you need to create another currency field and populate it with a workflow on the QLI based on your business requirements.

4. You can only control edit access to Sales Price from a Profile-level parameter.
  • It's not so bad to have to use this parameter, since you can also put it in a permission set, but it's pretty inconsistent from the way the rest of the profile/page layouts behave.

5. If your record is locked as the outcome of an approval process, you will not be able to save your pdf to the quote using the Create PDF button.
  • Speaking of inconsistent... on all other objects that are locked, you can almost always add an attachment.  To get around this, I changed our approvals to unlock the record, then used validation rules to keep the QLI from being changed.


The Create PDF button on Quote

I was asked about the ability to restrict the creation of a PDF for a given quote until a couple of business rules had been met.  As I thought about the solution, a few ideas came to mind:

1. Add a record type for the ok Quotes and assign that record type a new layout that includes the Create PDF button.  Remove the Create PDF from the other page layout.
2. Modify the behavior of the existing Create PDF button to check the status first.
3. Create a new VF page to replicate the Create PDF functionality.

Given the timing and other project considerations, I opted for #2, if it was feasible.  I knew that #1 would work but I was worried about introducing a record type and then having to roll it back when another quote-related project went live.

To start, I had to figure out if I could see the code that the out-of-box button was calling to render the PDF.  With chrome it was pretty easy to inspect the element and see the code the button was calling.

With a minor bit of additional javascript, the quote's status can be interrogated first and an informational alert raised if the business criteria are not met:

*******

var isOk = false;
if ('{!Quote.Status}' != 'XYZ')
{
    // do some business logic...
    isOk = true;
}

if (isOk)
{
    var pdfOverlay = QuotePDFPreview.quotePDFObjs['quotePDFOverlay'];

    pdfOverlay.dialog.buttonContents = "<input value=\'Save to Quote\' class=\'btn\' name=\'save\' onclick=\"QuotePDFPreview.getQuotePDFObject(\'quotePDFOverlay\').savePDF(\'0\',\'0\');\" title=\'Save to Quote\' type=\'button\' ><input value='Save and Email Quote' class='btn' name='saveAndEmail' onclick=\"QuotePDFPreview.getQuotePDFObject(\'quotePDFOverlay\').savePDF(\'1\');\"; title='Save and Email Quote' type='button' ><input value=\'Cancel\' class=\'btn\' name=\'cancel\' onclick=\"QuotePDFPreview.getQuotePDFObject(\'quotePDFOverlay\').close();\" title=\'Cancel\' type=\'button\' >";

    //change this to use the correct template for your business/environment!!
    pdfOverlay.summlid = 'XXXXXXXXXXXXX';

    pdfOverlay.setSavable(true);

    //change this to use the quote id
    pdfOverlay.setContents('/quote/quoteTemplateDataViewer.apexp?id={!Quote.Id}','quote/quoteTemplateHeaderData.apexp?id={!Quote.Id}');

    pdfOverlay.display();
}
else
{
    //raise an alert to let the user know about some business rule
    alert('The Quote requires XYZ before the PDF can be generated.');
}

*******




Friday, August 30, 2013

A Riddle, Wrapped in a Mystery; Inside an Enigma

Do you ever see inconsistent results when running unit tests in a sandbox versus running them in production?  Or inconsistent results when running unit tests between sandboxes?  Well, the black box that is the Salesforce unit test just became blacker and boxier to me.

I was trying to deploy a patch to production and was going through our normal build path that passes through our full sandbox.  When attempting to deploy to the full sandbox, our automated build process failed on a test method that we had not changed in any way.

The error: ... System.TypeException: Invalid date/time: 05/05/2010 00:00 AM stack...

A colleague and I inspected the method and the class and determined that they were identical in our sandbox and production.  When we ran the individual test in our full sandbox, it failed, and when we ran it in our production environment, it passed.

What the?!

The error pointed to this fragment of code in the test: datetime.parse('05/05/2010' + ' 00:00 AM');

I'm not sure why it was 00:00 AM, so I updated it to 12:00 AM and was able to proceed with the deployment.
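For what it's worth, constructing the value explicitly sidesteps parse()'s locale and format sensitivity altogether (a sketch):

```apex
// Datetime.parse() depends on the running user's locale settings;
// newInstance() takes explicit date/time parts and skips string parsing entirely
Datetime dt = Datetime.newInstance(2010, 5, 5, 0, 0, 0);
```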

I was unsettled by the experience because we had been able to deploy to this sandbox two weeks earlier and had not had this test method fail.  So, we opened a case with Salesforce and asked their "premier" support to explain it.  Our first support rep reproduced the results but gave us a canned response that it wasn't a good practice to use 00:00 AM.  Fine, we said, but tell us why it passes tests in production but not in our full sandbox, and what exactly changed in our sandbox?  We went on and on like this for a couple days, escalated the case, and are now doing the same dance with another "premier" support rep.

For those of you who read this, please try this at home and let me know if you can replicate this in your sandboxes:

Create a test object called TestDate__c.  Add one custom field called SomeDate (type = date/time).

Create an apex class (API version 27, for consistency's sake) with the following code:

global with sharing class TestingDate
{
    public static testMethod void testMyObj()
    {
        TestDate__c myObj = new TestDate__c();
        myObj.SomeDate__c = datetime.parse('05/05/2010' + ' 00:00 AM');
        insert myObj;
    }
}


Run your test.  The results we got were as follows:

NA14 (developer org): Pass
CS15: Fail
CS12: Pass

If nothing changed on CS15, how can this be?  Still waiting for an answer...

Wednesday, August 28, 2013

Winter 14 Highlights

Winter 14 previews are being applied to sandboxes over the next week or so.  I've gone through the release notes and pulled out some notable items.  Before getting to these items, a couple general reactions to the release notes:

  • Salesforce is getting massive.  I mean, between the Sales & Service features, Chatter updates, API changes, and all of the new ".com" products like Data.com, Work.com, Desk.com, and Social.com, it's a wonder that anyone is able to keep anything straight, including Salesforce employees.
  • Chatter keeps getting the lion's share of enhancements.  Since 2010, it seems that every quarterly release is stuffed with Chatter updates with a few core CRM improvements thrown in to keep the masses happy.  I've been doing Salesforce.com work since 2008 and only know of a few clients who use Chatter regularly.  I understand the academic appeal of Chatter but given the realities of email in business, can you really believe that integrating a canvas app into a Chatter stream is more valuable than providing a true M:M relationship between Accounts and Contacts or providing a better and more flexible Salesforce/Outlook/Exchange plug-in?  

Without further delay, here are some notable features coming our way in mid-October with Winter 14:

User Object Sharing
I was surprised to find out that this object was not previously governed by sharing.  There are some interesting use-cases where you'd potentially need to hide or share user details.  Curious to see how this works in Chatter-enabled orgs if the org-wide default is set to Private.  Also notable: Apex managed sharing is not supported for this object.

Approval Emails with Comments
This is one of those "duh" features that has taken a backseat to Chatter.  Finally, you can add the approver's comments to an approval email notification without having to build some trigger or vf-based email.

Embed a report within a standard detail layout
I would also categorize this under "duh".  Finally, remove the link to the report and just put the report in the layout.  Much better, though it seems there are some limits, such as only 2 charts per layout.

Sandbox Updates
Configuration-only sandboxes are being renamed to "Developer Pro" and will have storage limits bumped up from 500 MB to 1 GB.  Developer sandboxes will continue to be called "Developer" but will also get a bump in storage from 10 MB to 200 MB.

Developer Console
I do most of my development in Eclipse but if I'm away from my laptop, I use the developer console as it is a vast improvement over the standard code editor.  A couple highlights here:

Have you ever had a complex process to debug?  Well, you probably ran into a log limit, right?  Just before discovering the source of your headaches, you see text like "Log limit reached"... ugh!  You can now override log levels for a specific trigger or class.  So you could turn logging down by default and then turn it up on the class or trigger where you suspect the problem resides.  Sweet!

One new feature that caught my eye was this debug tool in the console.  It looks like it will graphically display the order of execution.  Very nice!


Visualforce Updates
VF has some HTML 5 updates that should prove useful.  One that stood out to me was the new <apex:input> tag.  You can specify the type by using a "type" attribute.  The browser, using HTML 5 standards, should render the input based on the type.  One use case where this should help is with input dates.  If you needed to provide a date input, without binding it to a Salesforce object's field, you had to do some hokey workaround to get the date picker to display.  Now, you should be able to just do something like <apex:input type="date">.
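As a sketch of what that might look like (the controller and its `myDate` property are hypothetical, and HTML5 components require the page docType to be set):

```html
<apex:page docType="html-5.0" controller="MyController">
    <!-- should render the browser's native date picker,
         no sObject field binding required -->
    <apex:input type="date" value="{!myDate}"/>
</apex:page>
```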

There is also support for a "list" attribute to add to the input tag.  It looks like this should provide visualforce with some autocomplete capability that you could previously do with some jquery magic.  For picklists, we'd typically generate a list in the controller, then bind the list to VF through selectlist/selectoption tags.  I guess we'd use the input tag and list attribute in cases where we'd need the autocomplete on and where we wouldn't need some of the selectlist/selectoption features like displaying but disabling values.

Apex Changes
The maximum number of code statements has changed from being a flat number to CPU time-based.  Previously this was 200k, which is surprisingly easy to hit when you have nested loops.  The new limit depends on whether you are running synchronously or asynchronously (batch Apex).  They've advertised this as the removal of a limit but, in reality, they've just changed how the limit is computed.  It seems like it'll be harder for a developer to anticipate CPU utilization.

The new Database.getUpdated() method lets you pass in an object name plus start and end date/times, and returns the Ids of records updated in that window.  That should be useful!
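Based on the release notes, a call should look roughly like this (the object name and time window are just examples):

```apex
// fetch the Ids of Accounts updated in the last 24 hours
Database.GetUpdatedResult result =
    Database.getUpdated('Account', Datetime.now().addDays(-1), Datetime.now());
Id[] updatedIds = result.getIds();
```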

There is a new BusinessHours static method that allows you to pass in a date/time and an id to a Business Hours object to see if something is within the business hours and to fetch when the next available business hours start.  That should be handy.
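A sketch of how that should look, assuming a default BusinessHours record exists in the org:

```apex
// find the org's default business hours
BusinessHours bh = [Select Id From BusinessHours Where IsDefault = true Limit 1];

// is right now within business hours?
Boolean open = BusinessHours.isWithin(bh.Id, Datetime.now());

// if not, when do business hours next start?
Datetime nextStart = BusinessHours.nextStartDate(bh.Id, Datetime.now());
```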

Other
Environment hub?!  I need to see screenshots to determine the actual utility, but it sounds potentially useful, particularly if you are an admin for multiple orgs across multiple instances.

***

When the changes are actually applied to my personal sandbox, I'll do some mockups of some of the new VF stuff.  Stay tuned.


Friday, August 23, 2013

Salesforce Sandbox Instances


A while back, I was on a client site that was affected by a sandbox outage that lasted over 3 days.  Unfortunately, both of our full copy sandboxes were on this instance, and a good number of our developer/configuration sandboxes were as well.  Needless to say, we were greatly affected by the outage, and when we asked about migrating some of our sandboxes to another instance to reduce the risk for our team, we were told that it was not possible.  Today, again, there is another major outage, this time on CS15, and again, most of our development, testing, and full copy sandboxes are affected.  It seems crazy to me that customers who seek the cloud for protection against this type of outage do not have the ability to specify that they want their sandboxes distributed across multiple instances.  If we have a break/fix situation, we are in a difficult spot, given that our development path depends on CS15 being operational.  Customers should be allowed to spread their sandboxes across instances to reduce the impact of another outage.

Vote for the idea here.

Tuesday, August 13, 2013

Required InputTextArea

I was asked to make an inputtextarea required on a visualforce page and thought, "ok, no problem, 2 minutes!".  In reality, this was kind of a nightmare to implement.  My vf page looked something like this:

<apex:pageblock id="pageBlock">
<apex:pageblocksection id="pageBlockSec">
<apex:pageblocksectionitem>
<apex:outputlabel value="Big Field"/>
<apex:inputtextarea value="{!myObject__c.Big_Field__c}" required="true" id="bigfield"/>
...

So the first issue is that the iconic red line next to the required field does not display for text areas.  To fix this, you have to wrap the tag with an outputpanel like this:

<apex:pageblock id="pageBlock">
<apex:pageblocksection id="pageBlockSec">
<apex:pageblocksectionitem>
<apex:outputlabel value="Big Field"/>
<apex:outputPanel styleClass="requiredInput" layout="block">
<apex:outputPanel styleClass="requiredBlock" layout="block"/>
<apex:inputtextarea value="{!myObject__c.Big_Field__c}" required="true" id="bigfield"/>
</apex:outputPanel>
....


When testing the page with the field left blank, the error that is displayed is something like:

pageBlock:pageBlockSec:j_id38:bigfield: Validation Error: Value is required.

This is not a message a user would understand, even with the text area marked with a red line.  Some of the initial searches turned up crazy solutions like using jquery to clean up the message, or rewriting the validation to occur within the controller.  So, it took a while but the solution was buried in this thread.

To remove the garbage text in the error, you have to provide the textarea a label attribute.  Your final markup will look something like this:

<apex:pageblock id="pageBlock">
<apex:pageblocksection id="pageBlockSec">
<apex:pageblocksectionitem>
<apex:outputlabel value="Big Field"/>
<apex:outputPanel styleClass="requiredInput" layout="block">
<apex:outputPanel styleClass="requiredBlock" layout="block"/>
<apex:inputtextarea value="{!myObject__c.Big_Field__c}" required="true" id="bigfield" label="Big Field"/>
</apex:outputPanel>
....

Friday, August 9, 2013

Auto save with TinyMCE and Salesforce

I'm on a project where our users have large numbers of long text areas and are often working on old-ish computers.  One common complaint was that they'd spend a good amount of time typing something only to find that they'd lost it when they clicked Save.  We explored some options within HTML 5, like local storage, but settled on trying auto-save.  Auto save is a common feature in most online services, like Blogger or Gmail, so this seemed like a good model to follow.  Additionally, it seemed less browser-dependent, which is also good.

The system is composed of a Visualforce page using TinyMCE as the rich text editor.  TinyMCE has a (somewhat buggy) function that will tell you if an editor instance "is dirty".  We set a timer to check this attribute every few seconds.  If the attribute returns true, we can execute a controller method via javascript remoting.

Putting it all together:

A visual indicator (to show the user that the auto save ran):

<div id="saving" class="minitext" style="color:#666; font-style:italic; display: none">AutoSaving...</div>

The timer:
var $j = jQuery.noConflict();
// Run the auto-save check every 5 seconds once the page has loaded
$j(document).ready(function()
{
  setInterval(function()
  {
    auto_save('{!$Component.theform.thepageblock.thepbs.somefield}');
  }, 5000);
});


The auto_save function referenced in the timer:
function auto_save(inputs)
{
  // First we check if any changes have been made to the editor window
  if(tinyMCE.getInstanceById(inputs).isDirty())
  {
    $j('#saving').show();
    var editor = tinyMCE.get(inputs);
    saveTestField('Some_Field__c', editor.getContent());

    // reset the dirty flag so the same content isn't saved repeatedly
    editor.isNotDirty = true;
  }
  else
  {
    $j('#saving').hide();
    return false;
  }
}


The saveTestField function:
function saveTestField(fieldName, fieldValue) 
{
    // recId is assumed to hold the current record's Id (set elsewhere on the page)
    Visualforce.remoting.Manager.invokeAction('{!$RemoteAction.AutoSave.updateFieldValue}', recId, fieldName, fieldValue, 
        function(result, event){
            if (event.status) {
            } else if (event.type === 'exception') {
                console.log(event.message);
            }
        }, 
        {escape: true}
    );
}


The Apex method:
@RemoteAction
public static Boolean updateFieldValue(Id recordId, String fieldName, Object fieldValue)
{
    String sObjectName = recordId.getSObjectType().getDescribe().getName();
    String queryString = 'Select Id, ' + String.escapeSingleQuotes(fieldName) + ' From ' + String.escapeSingleQuotes(sObjectName) + ' Where Id = \'' + String.escapeSingleQuotes(recordId) + '\'';
    Sobject record = Database.query(queryString);

    record.put(fieldName, fieldValue);
    update record;

    return true;
}


Observations:

  • UI validations are completely ignored.  If you have a required field in your UI, the field will still commit, without error.
  • The auto save only commits the field passed to the controller.  No other field is saved.

Issues

  • One issue that I observed while assembling this was that the isDirty attribute wasn't always reliable.  It turns out that you have to tell TinyMCE to save at some point for the attribute to reset itself.  You can do it by adding this line: tinyMCE.get(inputs).save();
  • Another issue: this recipe works well for existing records (records with Id).  If you're inserting a new record, you'll need to modify accordingly.

Credits
This was assembled with some helpful tips and tricks from other folks sharing their solutions.  Thank you for sharing!
https://github.com/pbattisson/Visualforce-Autosave/
http://webnv.net/2008/02/10/autosaving-with-jquery-and-tinymce/

Opportunity Contact Role "Trigger"

Among the many limitations of the standard OpportunityContactRole object in Salesforce is the inability to put a trigger on it.  The use case I wanted to solve was to aggregate and copy the contact information related to certain roles to the parent opportunity.  Some options I considered:

1. Create a batch job to query OCR nightly and update the parent opportunity
2. Create a trigger on Opportunity updates to fetch the OCR data
3. A combination of 1 and 2
4. Replace the standard OCR with a custom OCR and create the necessary trigger

I finally came across this blog post, and with some tinkering, found that it worked for our use-case. The idea is to hide an inline visualforce page within the standard layout and use that vf page's action property to run some code.  The author chose an asynchronous opportunity update whereas I went with a synchronous update.  The action property on the page tag is generally used for redirects, but could certainly be used for defaulting data as well.

Some issues I came across:

1. Since visualforce is served up from a different domain, I was seeing my page redirected by a servlet to a "page not found".  I was able to stop this by returning a non null page reference.
2. The js console in chrome logs some refusal errors due to the different domains.  This went away with the resolution to #1.
3. I was unable to refresh the parent page from the inline vf page after the update, so the user has to manually refresh the opportunity to see the effect.  This is still the case but our users were ok with it.
4. The hidden vf page's controller is trying to execute a DML.  This is usually followed by a page refresh.  The problem is that the page refresh is occurring within the inline VF, which, when it renders, again executes the controller's action.  This can cause some serious looping, which is why it is imperative for the action method to check whether an update is required or not.
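To make the pattern concrete, here is a minimal sketch with all names (page, class, method) hypothetical.  With an inline page like <apex:page standardController="Opportunity" extensions="OCRRollupController" action="{!syncContactRoles}"/> placed on the layout, the controller extension might look like:

```apex
public with sharing class OCRRollupController {
    private final Id oppId;

    public OCRRollupController(ApexPages.StandardController ctrl) {
        oppId = ctrl.getId();
    }

    public PageReference syncContactRoles() {
        // aggregate the contact role data of interest
        List<OpportunityContactRole> roles =
            [Select ContactId, Role From OpportunityContactRole
             Where OpportunityId = :oppId];

        // guard against the looping described in issue 4: compare the
        // aggregated values to the opportunity's current field values and
        // only perform the DML when something actually changed
        // ... update the Opportunity here if needed ...

        // returning a non-null reference avoided the redirect problem (issue 1)
        return ApexPages.currentPage();
    }
}
```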


Thursday, August 1, 2013

Visualforce Page with ContentType in MultiByte Language

I'm working on a project that supports several multibyte languages, like Chinese and Arabic.  One issue that we discovered in a new feature is related to exporting some content as a Word file.  The issue was in specifying the filename.  As you probably know, the syntax for something like this is:


<apex:page Controller="yourController" contenttype="application/msword#yourfilename" .. />

This works great but if you are substituting yourfilename with something from the controller, one symptom you could see is the file generated with the name of your VF page.  So, let's say you were doing something like this:

<apex:page Controller="yourController" contenttype="application/msword#{!someVar}" .. />

If {!someVar} is a value that is in Chinese or Arabic, your file name will probably look like "YourVFPage.doc" instead of "{!someVar}.doc".

There wasn't much in the Salesforce support community so I've come up with a workaround:

1. I added a charset identifier to the contenttype attribute like so:

<apex:page Controller="yourController" contenttype="application/msword#{!someVar};charset=utf-8" .. />

When you try to view your file, you get a little closer - your filename will likely look like: "------.doc".

2. The following thread gave me the idea for fixing the ---- characters.  In the controller, we just encode the someVar value like so:

  String someVar = EncodingUtil.urlEncode(myString, 'UTF-8'); 
  return someVar;

When you try to view your file, you'll see the multibyte value in the filename.  You may have to do some substitution to remove any other characters, but this should get you closer to a user acceptable solution.

Tuesday, June 18, 2013

MultiSelect and JQuery/Javascript

I had a requirement to give users a way to quickly select a couple of values in a multi-select picklist based on some other value they had previously selected on the form.  In our case, if the selected Language was X, then pick Region 1, Region 3, and Region 4 from the multi-select picklist automatically.

This one took a while to piece together so hopefully this will help someone out.

On my VF page, I created an anchor tag to act like a "Command Link":


  <a href="javascript:void(0);" id="selectAllRegions">[Select All Regions for Language]</a>

Using some jquery, I catch the click as follows:

  var $j = jQuery.noConflict();
  $j('#selectAllRegions').click(function () {
        selectRegions('{!$Component.regionsMS}');
  });

The regionsMS id is the id of your multiselect apex:inputfield.

The "selectRegions" function does the following:

function selectRegions(objId){
    var unSelectedId = objId + "_unselected";
    var multiSelect = document.getElementById(unSelectedId);

    for (var i = 0; i < multiSelect.options.length; i++) {
        if (multiSelect.options[i].text == "Region 1") {
            multiSelect.options[i].selected = true;
        }
    }

    // tell the standard multiselect widget to move the selected options over
    MultiSelectPicklist.handleMSPSelect(objId);
}

The key to making this work was this last function, which came up in a search here (there's some other NSFW stuff there, just fyi).  If you use firebug or chrome's developer tools, you'll see how the script interacts with the elements that make up the multiselect control.

And there you have it - when you click the link "Select All Regions for Language", Region 1 is selected.  All that needs to be done now is to evaluate the selected language and then change which region values are selected.
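As a sketch of that last step, a simple map from language to regions could drive which options get selected (all names and values here are hypothetical):

```javascript
// Hypothetical mapping from the selected language to the regions that
// should be auto-selected in the multi-select picklist.
var regionsByLanguage = {
  'X': ['Region 1', 'Region 3', 'Region 4'],
  'Y': ['Region 2']
};

// Given the chosen language and the option labels in the "unselected" box,
// return the labels that should be flagged as selected.
function regionsToSelect(language, optionLabels) {
  var wanted = regionsByLanguage[language] || [];
  return optionLabels.filter(function (label) {
    return wanted.indexOf(label) !== -1;
  });
}
```

Inside selectRegions, the hard-coded "Region 1" check would then become a lookup against the result of regionsToSelect.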

Friday, June 7, 2013

Rich Text Editors

On a recent project, I was asked to implement a visualforce page with rich text editing capability.  As I've come to learn, when you put an <apex:inputTextArea> tag inside a visualforce page, you don't get a rich text editor.  You get a text area.  No toolbars for formatting, unlike the standard layouts.  The standard layout editor appears to be ckeditor but if you're using visualforce, you have to put in the rich text editor yourself.  Fortunately, there are lots of editors out there.  I had no idea.

Based on the project needs and features, I ended up implementing the following for user evaluation:

  1. TinyMCE
  2. CKEditor
  3. NicEdit
  4. Redactor

TinyMCE ended up being the selected editor, mostly because of the ICE plugin.  I use Strikingly to power my website and Strikingly uses TinyMCE as its editor.  It's fairly straightforward to implement:


  1. Download TinyMCE and upload the zip as a static resource
    1. If you're using ICE, you'll need to include it in your plugin directory, rezip, and upload
  2. Update your visualforce page as follows:
<apex:includeScript value="{!URLFOR($Resource.tinymce, 'tinymce/jscripts/tiny_mce/tiny_mce.js')}"/> 
...

<apex:inputTextArea value="{!TEST__c.Some_Field__c}" id="somefield" style="width:100%;" styleclass="mceEditor"/>
<!--this is the initialization required for TINYMCE  -->
                <script type="text/javascript">
                   tinymce.init({
                            mode : "textareas",
                            editor_selector :"mceEditor",
                            theme : "advanced",
                            plugins : "ice,icesearchreplace,spellchecker,pagebreak,style,layer,table,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template,visualchars,wordcount",
                            theme_advanced_buttons1: 'ice_togglechanges,ice_toggleshowchanges,iceacceptall,icerejectall,iceaccept,icereject,|,bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,|,styleselect,formatselect,fontselect,fontsizeselect',
                            theme_advanced_buttons2: 'spellchecker,cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist,|,outdent,indent,blockquote,|,undo,redo,|,link,unlink,anchor,image,cleanup,help,code,|,forecolor,backcolor',
                            theme_advanced_buttons3: 'tablecontrols,wordcount',
                            theme_advanced_buttons4: "",
                            theme_advanced_toolbar_location: "top",
                            theme_advanced_toolbar_align: "left",
                            theme_advanced_statusbar_location : "bottom",
                            theme_advanced_resizing : true,
                            ice: {
                                          user: { name: '{!$User.Alias}', id: '{!$User.Alias}'},
                                          preserveOnPaste: 'p,a[href],i,em,strong'
                                },
                            width: "100%",
                            height: "200"      
                        });
                </script>

I highlighted a couple sections that are noteworthy:
  • You need to load the TinyMCE script, which is why you need the <apex:includeScript> tag
  • You can have multiple TextArea fields on your page and selectively enable TinyMCE using the editor_selector property when you initialize the editor.  Just set the styleClass property on your text area fields that you want to override with TinyMCE.
  • If you're using ICE, the modification here allows you to capture the user who edits the text.
And if all goes well, you should have a fully featured rich text editor in place of the plain text area.

Deployment Failure

I recently assisted an organization with developing a couple of triggers to help them roll up some child data onto the parent records.  On the evening we were set to deploy, we ran into a couple issues, both of which could be filed under "WTF":

1. Change Sets were disabled for the organization
2. Deploying via Eclipse generated over 150 errors in the managed packages that were installed in their org

The client's system admin opened cases for both issues - the first with Salesforce, the second with the managed package vendor.

The response from the managed package vendor was illuminating so I wanted to share.  They forwarded us this community thread in which the question of managed package errors was settled:

http://boards.developerforce.com/t5/Apex-Code-Development/Unit-Test-Code-Coverage-and-Managed-Packages/m-p/471121#M86324

In summary: if you deploy with change sets, managed package code is ignored.  If you deploy with Eclipse, you're out of luck if there are test class failures in the managed package.  

Will update the blog with Salesforce's explanation of issue #1.  

Tuesday, May 21, 2013

User Hierarchy based Sharing

I have a client who has flattened out their role hierarchy to enable some other business processes within Salesforce.  So when it came time for them to implement an object in Salesforce that required some hierarchy-based sharing, we had to build something custom based on the user record's manager hierarchy.  For example, if Joe owns a record, his manager, Jane, should have read access to the record.  Jane's manager, Mike, should also have read access.  This access should continue to the top of the hierarchy.

In the code below, keep this in mind:

Joe -> Jane -> Mike -> Mary

In our org, the sensitive records that require Private org-wide defaults give record owners read-write access by virtue of their ownership of the record.  For all others to have read access, we have to create sharing records for the object.

A share is composed of the following elements:
  • ParentId - the record that you want to share with others
  • RowCause - the custom reason you are sharing this record
  • AccessLevel - the level of access being given (in our ex: read)
  • UserOrGroupId - the user (or group) who should have access

By creating a record with these required elements, these private records can be shared with those users (or groups) mentioned in the shares.

Programmatically, it is pretty easy to create a trigger on an object and create a related share for the object's owner.  Salesforce's documentation of apex managed sharing does an adequate job of this.  Where I scratched my head a little was figuring out how to share all the way up the hierarchy, and how to do the sharing without hitting any governor limits.  I did some searching of blogs and found an interesting post by Jeff Douglas.  Jeff's solution is elegant, but I wanted to try another way myself.  In plain English, here is what I wanted to do:

1. Create an object share record for the record's owner's manager
2. Then, create another object share for that manager's manager
3. and so on...
4. Insert list of object share records

It felt like a loop to me, so what I tried initially was to iterate through the object share collection, and for each record in the collection, create another share record, assign the manager, then add it to the collection and continue until a manager is no longer found for the user.  In this snippet, the collection of all users and their managers is held in a map:

        for(User u: [Select u.Id, u.ManagerId From User u])
        {
            mapUserManagers.put(u.Id, u.ManagerId);
        }

        for(someObject__Share os: sharesToCreate)
        {
            if(mapUserManagers.get(os.UserOrGroupId) != null)
            {
                someObject__Share os_mgr = new someObject__Share();
                os_mgr.AccessLevel = MGR_ACCESS;
                os_mgr.ParentId = os.ParentId;
                os_mgr.RowCause = MGR_ROW_CAUSE;
                os_mgr.UserOrGroupId = mapUserManagers.get(os.UserOrGroupId);
                sharesToCreate.add(os_mgr); // adding to the collection being iterated
            }
        }

This approach throws an error:  "Cannot Modify a Collection While It Is Being Iterated"

Some folks over at stackexchange explain the issue with the index behind this loop.

So, I took a slightly different approach to the loop:

        allUsers = [Select u.Id, u.ManagerId From User u];
        for(User u: allUsers)
        {
            mapUserManagers.put(u.Id, u.ManagerId);
        }
        Id managerId;
        for(SomeObject__c so: RecsToProcess)
        {
            managerId = mapUserManagers.get(so.OwnerId);
            // walk up the chain until a user has no manager
            while (managerId != null)
            {
                SomeObject__Share sos = new SomeObject__Share();
                sos.AccessLevel = SHARE_MGR_ACCESS;
                sos.ParentId = so.Id;
                sos.RowCause = SHARE_MGR_ROW_CAUSE;
                sos.UserOrGroupId = managerId;
                sharesToCreate.add(sos);
                managerId = mapUserManagers.get(managerId);
            }
        }

        //allow for partial successes
        Database.SaveResult[] srList = Database.insert(sharesToCreate, false);

This compiles without issue and achieves the effect of creating a collection of share records for each of the managers with only 1 SOQL call.  It's another way to solve the hierarchy loop challenge.  Hope it helps you find a solution to your own problem.