Saturday, October 31, 2015

How to use Windows authentication to connect to a database from a web server when they are not in a trusted domain or a firewall blocks Windows authentication

It is quite normal in ASP.NET applications to store the user name and password credentials for the database in the web server's web.config file, and a penetration test review often highlights this as a major issue. I am not a great fan of obfuscation and encryption here: it gives no performance advantage, and two-way encryption can be cracked if someone is really serious about it. The quick alternative is to rely on Windows authentication by specifying "Integrated Security=SSPI" in the connection string instead of a user name and password. This is quite easy to achieve if both the SQL Server and the web server are in the same trusted domain.
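As a minimal sketch of what the connection then looks like (the server and database names below are placeholders), the connection carries the identity of the worker process instead of stored credentials:

using System.Data.SqlClient;

public static class ConnectionTest
{
    // "SQLBOX" and "MyDatabase" are placeholder names; note there is no
    // user name or password anywhere in the connection string.
    public static void OpenWithWindowsAuth()
    {
        var connectionString =
            "Data Source=SQLBOX;Initial Catalog=MyDatabase;Integrated Security=SSPI;";
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // authenticates as the process (app pool) identity
        }
    }
}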

I have come across cases where the infrastructure admin places SQL Server on an isolated box with no domain membership, or where Windows authentication to the SQL box is explicitly blocked by a firewall rule. Simple Windows authentication to the SQL box then becomes difficult.
There is a way around this issue using the account mirroring feature in Windows, explained below. The trick is to create a low-privileged local user with the same credentials on both machines and grant SQL access to this user.

Create a local user with the same credentials on the web and SQL Server machines
  • On the web server, go to Administrative Tools > Computer Management > Local Users and Groups > Users and create a new local user "test" (make sure "Password never expires" is selected).


  • Perform the same operation on the SQL Server box and make sure the user name and password match.
Assign roles to the local user on the web and SQL Server machines
  • Add the local user to the IIS_IUSRS group (you can either use the Computer Management console or run aspnet_regiis.exe -ga "test" from the command line).
  • Under Local Users and Groups > Users, select the new user "test", open its properties and use the "Member Of" tab to add roles to the user.
  • I added the user to "Remote Desktop Users" to allow an initial login; this role can be removed after the first remote login using the "test" credentials.
  • Make sure the same role-assignment steps are performed on the SQL Server machine as well.
  • Log in to both the web and SQL machines as the "test" user (RDP or direct login). This activates the account for use.
Now that the local user setup is complete, we can move on to using this user's identity for the IIS app pool.

Set up the IIS app pool identity
  • Open the IIS Manager console and select the Advanced Settings of the app pool.

  • Open the Identity value field to set a custom identity instead of ApplicationPoolIdentity.
  • Enter the local Windows credentials for the user "test".

  • Since the app pool identity is set to the local user credentials, SQL Server can be accessed using Windows authentication. For the authorisation part, we need to allow the mirrored Windows user account (i.e. the "test" account created on the SQL box) to access SQL Server resources.
  • Create a SQL login for the newly created local user "test":
USE master
GO
/*create login*/
CREATE LOGIN [SQL-Computer-Name\test] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
GO
  • Create a SQL user on the database of interest, mapped to the login, i.e. "test":
USE [DatabaseofInterest]
GO
/*create user*/
CREATE USER [SQL-Computer-Name\test] FOR LOGIN [SQL-Computer-Name\test] WITH DEFAULT_SCHEMA=[dbo]
GO
  • Create a SQL role with limited privileges and assign it to the user:
/*create new role*/
CREATE ROLE [WebsiteLeastAccessRole] 
GO
/*grant access to role*/
GRANT EXECUTE, SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO WebsiteLeastAccessRole
GO
/*assign the role to user*/
EXEC sp_addrolemember 'WebsiteLeastAccessRole', 'SQL-Computer-Name\test';
GO

This completes the setup for connecting the web and SQL servers using Windows authentication. It removes the need to store SQL credentials on the web server, reducing the chance of compromised credentials. The responsibility now shifts to the network admin to keep the network secure: if the web machine is compromised, the attacker can still fire commands at SQL Server.

    Sunday, July 6, 2014

    Creating a simple address finder using Google Places Autocomplete

    There are a number of old ASP.NET Web Forms applications out there with data entry forms. If by any chance you are bound to a web system that captures a lot of user demographic information, you are left with no choice but to type the details out. That said, there are a number of ways you can add value to an existing application and improve the user experience without changing much of the existing data capture fields and associated logic. Using location-based services is one of the quick wins, and this post details how to create an address finder user control.

    Google's Place Autocomplete is used in this post, and you do not need to go through the hassle of getting a specific API key for non-commercial or low-usage scenarios. Read the Google Place Autocomplete documentation for more information. I have an existing ASP.NET web application which uses address entry fields as seen below.
    The same address fields sit in different areas of the application, with different validation criteria and logic wired to them. One of the easy ways to bring user-friendliness is to introduce an address auto-populate control, shown below.

    Once the user selects a recommendation, the address data is populated back into the existing address fields. Here I am not modifying the existing controls or code; instead I am adding some additional capability to populate the address fields faster. This approach mitigates the risk of breaking any existing feature and avoids much of the regression testing.

    We can now look at how to achieve this in the following steps:
    • Include the following JavaScript in the master page of the application:
    <script type="text/javascript" src="https://maps.google.com/maps/api/js?sensor=false&libraries=places&language=en-AU">
    </script>

    • Create a user control AddressAutoComplete.ascx with the following markup (the div is given an id and runat="server" so the script below can reference it):
    <div id="AddressAutoFillRow" runat="server">
    Address Search<asp:TextBox ID="AddressTextBox" runat="server" Width="450" />
    </div>
    • Add the following JavaScript to the user control markup page (we are using jQuery, and the following code works on old jQuery versions as well):
    <script type="text/javascript">
    $(function () {
     if (typeof google == 'undefined' || typeof google.maps == 'undefined'
         || typeof google.maps.places == 'undefined') {
       // The Places library has not loaded, so hide the autocomplete box.
        $("#<%=AddressAutoFillRow.ClientID %>").hide();
      } else {
          //Setting up Auto Complete for Address
      var residentialAutocomplete = new google.maps.places
                                       .Autocomplete($('#<%=AddressTextBox.ClientID %>')[0], {});
      //Australia biased results
      var bounds = new google.maps.LatLngBounds(
          new google.maps.LatLng(-44.21370990970204, 110.7421875),
          new google.maps.LatLng(-9.188870084473393, 154.6435546875));

      residentialAutocomplete.setBounds(bounds);
      google.maps.event.addListener(residentialAutocomplete, 'place_changed', function () {

         var place = residentialAutocomplete.getPlace();
         var $data = {}; // map of address component type -> long_name
         for (var i = 0; i < place.address_components.length; i++) {
           $data[place.address_components[i].types[0]] = place.address_components[i].long_name;
         }
          var st = '';
          if ($data != undefined) {
              if ($data['street_number'] != undefined) st = $data['street_number'];
              if ($data['route'] != undefined) st += " " + $data['route'];
              if ($data['subpremise'] != undefined) st = $data['subpremise'] + " / " + st;
          }
          $('#<%=this.AddressLine1ClientId %>').val(st);
          $('#<%= this.CityClientId %>').val($data['locality']);
          if ($("#<%=this.CountryClientId %> option:contains('"+$data['country']+"')").length>0){
              $('#<%=this.CountryClientId %> option').filter(function () { 
               return ($(this).text() == $data['country']); }).attr('selected', true);
          } else {
                   $('#<%=this.CountryClientId %>').val("");
             }
            $('#<%=this.StateClientId %>').val($data['administrative_area_level_1']);
            $('#<%=this.PostCodeClientId %>').val($data['postal_code']);
          });
         }
       });
    </script>

    • The code-behind looks as follows; the "ViewStateProperty" attribute from my earlier post is used to maintain state for the properties in view state:

    public partial class AddressAutoComplete : System.Web.UI.UserControl
    {
        [ViewStateProperty]
        public string AddressLine1ClientId { get; set; }
        [ViewStateProperty]
        public string CityClientId { get; set; }
        [ViewStateProperty]
        public string StateClientId { get; set; }
        [ViewStateProperty]
        public string PostCodeClientId { get; set; }
        [ViewStateProperty]
        public string CountryClientId { get; set; }

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            if (!IsPostBack)
            {
                this.AddressLine1ClientId = CityClientId = StateClientId
                    = PostCodeClientId = CountryClientId = string.Empty;
            }
        }
    }
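
    If you do not have the "ViewStateProperty" attribute from the earlier post to hand, the same behaviour can be hand-written for each property by reading and writing the control's ViewState bag directly. A minimal sketch for one of the properties:

    // Hand-written equivalent of what [ViewStateProperty] automates (sketch):
    public string AddressLine1ClientId
    {
        get { return (string)ViewState["AddressLine1ClientId"]; }
        set { ViewState["AddressLine1ClientId"] = value; }
    }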

    Now the control is ready to use in web pages; we do that by registering the control on the page and setting the ClientId properties of the user control:
    <%@ Register Src="~/AddressAutoComplete.ascx" TagName="AddressSearch" TagPrefix="cc" %>
     <cc:AddressSearch ID="AddressAutoCompleteControl" runat="server" />
    [the old mark up for address control sits here] 

    In the code-behind (.aspx.cs) file:
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            AddressAutoCompleteControl.AddressLine1ClientId = txtAddress.ClientID;
            AddressAutoCompleteControl.CityClientId = txtCity.ClientID;
            AddressAutoCompleteControl.StateClientId = txtState.ClientID;
            AddressAutoCompleteControl.PostCodeClientId = txtPostCode.ClientID;
            AddressAutoCompleteControl.CountryClientId = ddCountry.ClientID;
        }
    }

    Saturday, March 8, 2014

    Creating a concatenated delimited string from a SQL result set and avoiding character encoding when using "FOR XML PATH"

    There are a number of ways to get a column result set into a single delimited string. Within a function you can use the single-variable assignment method or the STUFF T-SQL function. STUFF in conjunction with FOR XML PATH is the easiest option when you need it as an inner query, but it automatically HTML-encodes the string in certain cases (see the bad example in case 2).

    Let's look at a simple example using a set of country names.

    Case 1. Variable based solution 
    declare @delimitedCountryName varchar(max)
    set @delimitedCountryName=''
    select @delimitedCountryName += case when len(@delimitedCountryName) > 0 then ',' + CountryName else CountryName end
    FROM Country

    select @delimitedCountryName

    Case 2. The XML path and stuff system function based solution
    -- BAD: if the country name contains special characters it will be automatically HTML-encoded
    -- (e.g. the country name 'papua & new guinea' is returned as 'papua &amp; new guinea')
    select stuff(
    (select ', ' + countryname
    from Country
    for xml path('')
    )
    , 1, 2, '') as delimitedCountryName;


    -- GOOD approach (either use case 1 or the option below)
    select stuff(
    (select ', ' + countryname
    from Country
    for xml path(''), root('MyString'), type
    ).value('/MyString[1]','varchar(max)')
    , 1, 2, '') as delimitedCountryName;

    Monday, January 13, 2014

    Creating PDF documents from a web page response using ABCpdf

    ABCpdf is a handy third-party (paid) tool for converting many forms of documents to PDF files. In a number of instances developers are asked to generate PDF files based on a web page response. Here is sample code to implement PDF generation from a target .aspx page response.

    In this example I am demonstrating how to use ABCpdf effectively to generate PDF files from an .aspx page response; it could be any URL that spits out HTML. We can create a GeneratePDF method in the ASP.NET web application and leverage the HttpResponse object to write out the PDF.

    Let's assume the URL is http://loaduserdetails.aspx?id=1; once developed, our API will allow us to call: GeneratePDF("http://loaduserdetails.aspx?id=1");

    public void GeneratePDF(string url)
    {
        var pdfGenerator = new PdfGenerator();
        byte[] dataBytes = pdfGenerator.GetPdfDataBytes(url);
        Response.ContentType = "application/pdf";
        Response.AddHeader("content-disposition", "inline; filename=member_profile.pdf");
        Response.AddHeader("content-length", dataBytes.Length.ToString());
        // ensure that the response is not cached; that would defeat the whole purpose
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        Response.AddHeader("Cache-Control", "no-store, no-cache");
        Response.Cache.SetNoStore();
        Response.BinaryWrite(dataBytes);
        Response.End();
    }
    The PdfGenerator class has a dependency on ABCpdf; for illustration purposes I am using ABCpdf version 8. The resulting PdfGenerator class looks as below:
        using WebSupergoo.ABCpdf8;
        public class PdfGenerator
        {
            private Doc _pdFdoc;
            public PdfGenerator()
            {
                this._pdFdoc = new Doc();
            }

            public byte[] GetPdfDataBytes(string aspxPageURL)
            {
                // disable page caching
                _pdFdoc.HtmlOptions.PageCacheEnabled = false;
                _pdFdoc.HtmlOptions.UseNoCache = true;
                _pdFdoc.FontSize = 12;
                // load the document and, if it succeeds, add the footer
                if (this.LoadDocument(aspxPageURL))
                {
                    AddDocumentFooter();
                }

                var dataArray = _pdFdoc.GetData();
                _pdFdoc.Dispose();
                return dataArray;
            }
            private bool LoadDocument(string theUrl)
            {
                bool success=false;
                var pDdoc = new Doc();
                pDdoc.Rect.Inset(50, 50);
                try
                {
                    var docPageCount = pDdoc.AddImageUrl(theUrl, true, 800, true);
                    while (true)
                    {
                        if (!pDdoc.Chainable(docPageCount))
                        {
                            break;
                        }
                        pDdoc.Page = pDdoc.AddPage();
                        docPageCount = pDdoc.AddImageToChain(docPageCount);
                    }
                    _pdFdoc.Append(pDdoc);
                    success=true;
                }
                catch (Exception err)
                {
                    //handle error
                    _pdFdoc.AddHtml("<b>Unable to render page</b>");
                }
                finally
                {
                    pDdoc.Dispose();
                }
                return success;
            } 

      private void AddDocumentFooter()
            {
                _pdFdoc.Rect.String = "30 790 600 30";
                _pdFdoc.Rect.Position(0, 30);
                _pdFdoc.HPos = 1.0; // Align text right
                _pdFdoc.FontSize = 10;
                for (var i = 1; i <= _pdFdoc.PageCount; i++)
                {
                    _pdFdoc.Page = i;
                    _pdFdoc.PageNumber = i;
                    _pdFdoc.Font = _pdFdoc.EmbedFont("Tahoma");                
                    _pdFdoc.AddText("Document Generated by MyApplication  Page " 
                                                                        + i + " of " + _pdFdoc.PageCount);                
                    _pdFdoc.Flatten();
                }

            }

        }
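
    For completeness, a page in the application could trigger the generation from its load event; a sketch reusing the example URL from above:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Renders the target page as a PDF and streams it to the browser.
        GeneratePDF("http://loaduserdetails.aspx?id=1");
    }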

    Friday, November 29, 2013

    SQL tips for writing better T-SQL queries with performance in mind

    A few months back, I had a chance to get my hands dirty with an old set of code written well before I even got into the IT space. My task was to add a bunch of new features on top of the existing code base. I spent a few hours looking at the existing code (back-end in particular) and, to my shock, it used constructs that reduce query performance, increase complexity and contribute to poor maintainability. Over time everybody ignored it, until we started paying the price for the rush job done to get the product to market. The impact came mostly as customer complaints rather than as an audit wake-up call; in either case we had to resolve it sooner rather than later.

    It's often hard to convince business owners of the need to optimise code for future maintainability, as from their standpoint there is no short-term return on investment. On the other hand, it is much easier to educate the product maintenance team about the benefits of following good guidelines, which in most cases results in maintainable and optimised code. Here I want to cover tips for improving SQL performance purely from a query perspective, as in most cases tuning is done by optimising indexes, reducing fragmentation, partitioning and so on. The intention is to cover a few tips to follow while writing or modifying stored procedures that may improve performance and reduce unwanted server resource consumption.

    1. Use the actual column names in the SELECT statement instead of the '*' symbol.
    2. Use stored procedures or parameterised queries, as this facilitates SQL plan and data cache reuse (see the C# sketch after this list).
    3. Suppress the "number of rows affected" messages:
    SET NOCOUNT ON
    4. Keep Transact-SQL transactions as short as possible within a stored procedure.
    5. Try to use fixed-length data types in the WHERE clause.
    6. Offload tasks like string manipulation, concatenation, row numbering, case conversion and type conversion to the front-end application wherever possible.
    7. Avoid temporary tables and, wherever possible, use table variables instead (table variables are not a good candidate in transaction scenarios).
    8. Avoid searching with not-equals operators (e.g. NOT and <>), as they result in table and index scans instead of seeks.
    9. Perform all referential integrity checks and data validations using constraints (foreign key and check constraints) instead of triggers.
    10. Avoid dynamic SQL statements as much as possible (use sp_executesql instead of EXEC if dynamic SQL is unavoidable).
    11. Use outer joins sparingly, or only where really necessary; inner joins are more efficient.
    12. Use TRUNCATE instead of DELETE if you need to clear a whole table (not possible if referential integrity constraints are present; mostly applicable to staging tables).
    13. Instead of SELECT COUNT(*) FROM <table_name>, try using
    SELECT rows FROM sysindexes WHERE id = OBJECT_ID('<table_name>') AND indid < 2
    (good for OLAP systems; in small OLTP databases there is no need to trust sysindexes).
    14. Prefer EXISTS over the OR and IN operators (exceptions apply for smaller and limited subsets).
    15. If you have ">=" and "<=" operators, or in some "IN" cases, check whether BETWEEN can be used instead, as it is more efficient.
    16. Where possible, avoid string concatenation, as it is not a fast process.
    17. If possible, avoid data conversion functions in the WHERE clause, as they prevent the optimiser from seeking on an index over the column,
    e.g. SELECT * FROM Requests WHERE CONVERT(VARCHAR(12), created_on, 101) = CONVERT(VARCHAR(12), GETDATE() - 360, 101)
    18. A query with one or more OR clauses can sometimes be rewritten as a series of queries combined with UNION ALL, or using a comma-separated set (in SQL 2008 and above).
    19. Always use SQL-92 syntax instead of SQL-89.
    20. Try to avoid DDL (Data Definition Language) statements inside your stored procedures.
    21. Do not use the HAVING clause for filtering in non-aggregate scenarios (use the WHERE clause).
    22. Try to avoid SQL Server cursors whenever possible; use a WHILE loop instead.
    23. Use SELECT with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows.
    24. Use UNION ALL instead of UNION whenever possible (UNION removes duplicates, hence it is costly).
    25. Don't use the prefix "sp_" in stored procedure names.
    26. Do not attempt recursive function calls, as different SQL Server versions have different default recursion limits.
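    To illustrate tip 2 from the application side, here is a minimal parameterised-query sketch in C#; the connection string, table and column names are illustrative:

    using System.Data;
    using System.Data.SqlClient;

    public static class CountryQueries
    {
        // Parameterised query: SQL Server can cache and reuse the query plan,
        // and the input value is never concatenated into the SQL text.
        public static string GetCountryName(string connectionString, int countryId)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT CountryName FROM Country WHERE CountryId = @countryId", connection))
            {
                command.Parameters.Add("@countryId", SqlDbType.Int).Value = countryId;
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
    }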
    All the points mentioned are valid for SQL Server 2005 and above. There are also some non-performance-related SQL best practices that you can try when you face the SQL engine next time:

    1. Minimize the use of NULLs in the result set; use the ISNULL or COALESCE functions to deal with them.
    2. All database objects should have a corresponding SQL script (have schema scripts ready and stored in source control).
    3. Use upper case for all T-SQL keywords.
    4. Give each script/stored procedure a prolog describing its business logic.
    5. Always prefix dbo. to the stored procedure name (if indeed you are using the default schema).
    6. Use user-defined functions to encapsulate code for reuse (use this option carefully, as it has the potential to create low-performing queries).
    Lastly, do not forget to learn and harness the real power of common table expressions; they are a great way to write clean and high-performing code.