Jerrybrandyella's Profile




  • As far as I know, there is no "out of the box" work item feature to achieve this.

    It is not supported by either the Hosted XML process or the Inheritance process.

    However, I think you can do this by writing some custom scripts.


    You can do it with webhooks:

    1. Build an API app (e.g. a web API) that updates the parent work item through the REST API.

    2. Create a webhook for the "work item updated" or "work item created" event and set the field filter.

    3. Point the webhook at the API app URL from step 1 in the webhook settings.
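
    As a minimal sketch of the copy step the API app from step 1 would perform: the list it returns is the JSON Patch body format the work item REST API expects, but the event-payload access (`resource.fields`) is a simplified assumption of mine; inspect a real "work item updated" payload before relying on it.

```python
def build_parent_patch(event: dict, field: str) -> list:
    """Build the JSON-Patch body that copies `field` from the child work
    item in a webhook event onto its parent.

    NOTE: event["resource"]["fields"] is an assumed, simplified payload
    shape; check an actual webhook payload for the real structure.
    """
    value = event["resource"]["fields"][field]
    # The work item update API expects a list of JSON Patch operations.
    return [{"op": "add", "path": f"/fields/{field}", "value": value}]
```

    The app would then send this body in a PATCH request to the parent work item's REST endpoint with content type `application/json-patch+json`.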


    Another option is a tool named TFS Aggregator. You can configure it to apply certain rules (such as copying values from a child item to a parent item), and it will trigger after a work item is updated or created.

    I know this is inconvenient. You may want to submit a suggestion ticket for this feature on the UserVoice website.

    By the way, here is a suggestion ticket with similar requirements. You could also vote on it and add comments.

    Hope this helps.

    • 1 answer
    • 0 votes
  • Asked on July 17, 2020 in Python.

    The XML is converted to a dict, and the parsing logic is then written against the dict; the reason for this is that the same logic can be used for JSON. Stack Overflow was amazingly helpful, and the solution is built from the responses in the threads linked below. For simplicity I have created a 3-level nested XML. This works on Python 3.

    <?xml version="1.0"?><Company><Employee><FirstName>Hal</FirstName><LastName>Thanos</LastName><ContactNo>122131</ContactNo><Email></Email><Addresses><Address><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form></forms></Address></Addresses></Employee><Employee><FirstName>Iron</FirstName><LastName>Man</LastName><ContactNo>12324</ContactNo><Email></Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company>
    <?xml version="1.0"?><Company><Employee><FirstName>Captain</FirstName><LastName>America</LastName><ContactNo>13322</ContactNo><Email></Email><Addresses><Address><City>Trivandrum</City><State>Kerala</State><Zip>28115</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form></forms></Address></Addresses></Employee><Employee><FirstName>Sword</FirstName><LastName>Man</LastName><ContactNo>12324</ContactNo><Email></Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company>
    <?xml version="1.0"?><Company><Employee><FirstName>Thor</FirstName><LastName>Odison</LastName><ContactNo>156565</ContactNo><Email></Email><Addresses><Address><City>Tirunelveli</City><State>TamilNadu</State><Zip>36595</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form></forms></Address></Addresses></Employee><Employee><FirstName>Spider</FirstName><LastName>Man</LastName><ContactNo>12324</ContactNo><Email></Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company>
    <?xml version="1.0"?><Company><Employee><FirstName>Black</FirstName><LastName>Widow</LastName><ContactNo>16767</ContactNo><Email></Email><Addresses><Address><City>Mysore</City><State>Karnataka</State><Zip>12478</Zip><forms><form><id>ID1</id><value>LIC</value></form></forms></Address></Addresses></Employee><Employee><FirstName>White</FirstName><LastName>Man</LastName><ContactNo>5634</ContactNo><Email></Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company>

    In the config file for this XML, every array/multi-level/explode column must be marked with []. The header row is required, since the code refers to the columns by name.

    Change the variables to match where you store the files: process_config_csv = 'config.csv' and xml_file_name = 'test.xml'.

    XPATH,ColumName,CSV_File_Name
    /Company/Employee[]/FirstName,FirstName,Name.csv
    /Company/Employee[]/LastName,LastName,Name.csv
    /Company/Employee[]/ContactNo,ContactNo,Name.csv
    /Company/Employee[]/Email,Email,Name.csv
    /Company/Employee[]/FirstName,FirstName,Address.csv
    /Company/Employee[]/LastName,LastName,Address.csv
    /Company/Employee[]/ContactNo,ContactNo,Address.csv
    /Company/Employee[]/Email,Email,Address.csv
    /Company/Employee[]/Addresses/Address[]/City,City,Address.csv
    /Company/Employee[]/Addresses/Address[]/State,State,Address.csv
    /Company/Employee[]/Addresses/Address[]/Zip,Zip,Address.csv
    /Company/Employee[]/Addresses/Address[]/type,type,Address.csv
    /Company/Employee[]/FirstName,FirstName,Form.csv
    /Company/Employee[]/LastName,LastName,Form.csv
    /Company/Employee[]/ContactNo,ContactNo,Form.csv
    /Company/Employee[]/Email,Email,Form.csv
    /Company/Employee[]/Addresses/Address[]/type,type,Form.csv
    /Company/Employee[]/Addresses/Address[]/forms/form[]/id,id,Form.csv
    /Company/Employee[]/Addresses/Address[]/forms/form[]/value,value,Form.csv
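
    As a side note, the dotted column keys the parser works with come straight from these XPATH entries. The normalization can be sketched on its own (the helper names here are mine, not from the original script; they mirror the strip/replace calls used in the code):

```python
def xpath_to_key(xpath: str) -> str:
    """Normalize a config XPATH into the dotted key pd.json_normalize produces:
    '/Company/Employee[]/FirstName' -> 'Company.Employee.FirstName'."""
    return xpath.strip('/').replace('/', '.').replace('[]', '')

def xpath_prefix(xpath: str) -> str:
    """Column selected in the first pass: everything before the first []."""
    return xpath.strip('/').replace('/', '.').split('[]')[0]
```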

    The code that creates the multiple CSVs based on the config file is:

    import csv
    import json
    import os
    import sys
    from collections import defaultdict
    from datetime import datetime

    import numpy as np
    import pandas as pd
    import xmltodict


    def getMatches(L1, L2):
        R = set()
        for elm in L1:
            for pat in L2:
                if elm.find(pat) != -1:
                    if elm.find('.', len(pat) + 1) != -1:
                        R.add(elm[:elm.find('.', len(pat) + 1)])
                    else:
                        R.add(elm)
        return list(R)


    def xml_parse(xml_file_name):
        try:
            with open(xml_file_name) as xml_file:
                for xml_string in xml_file:
                    """Converting the xml to dict"""
                    data_dict = xmltodict.parse(xml_string)
                    """Converting the dict to a pandas DF"""
                    df_processing = pd.json_normalize(data_dict)
                    xml_parse_loop(df_processing)
        except Exception as e:
            print(str(e))


    def xml_parse_loop(df_processing_input):
        CSV_File_Name = []
        """Getting the list of csv files to be created"""
        with open(process_config_csv, newline='') as csvfile:
            DataCaptured = csv.DictReader(csvfile)
            for row in DataCaptured:
                if row['CSV_File_Name'] not in CSV_File_Name:
                    CSV_File_Name.append(row['CSV_File_Name'])
        """Iterating the list of CSVs"""
        for items in CSV_File_Name:
            df_processing = df_processing_input
            df_subset_process = []
            df_subset_list_all_cols = []
            df_process_sub_explode_Level = []
            df_final_column_name = []
            print('Parsing the xml file for creating the file - ' + str(items))
            """Fetching the field list to process from the config file"""
            with open(process_config_csv, newline='') as csvfile:
                DataCaptured = csv.DictReader(csvfile)
                for row in DataCaptured:
                    if row['CSV_File_Name'] in items:
                        df_final_column_name.append(row['ColumName'])
                        """Getting the columns until the first []"""
                        df_subset_process.append(row['XPATH'].strip('/').replace("/", ".").split('[]')[0])
                        """Getting all the column names"""
                        df_subset_list_all_cols.append(row['XPATH'].strip('/').replace("/", ".").replace("[]", ""))
                        """Getting all the columns to explode"""
                        df_process_sub_explode_Level.append(row['XPATH'].strip('/').replace('/', '.').split('[]'))
            explode_ld = defaultdict(set)
            """Mapping each explode level to its column names"""
            for x in df_process_sub_explode_Level:
                if len(x) > 1:
                    explode_ld[len(x) - 1].add(''.join(x[:-1]))
            explode_ld = {k: list(v) for k, v in explode_ld.items()}
            """Remove column duplicates"""
            df_subset_process = list(dict.fromkeys(df_subset_process))
            for col in df_subset_process:
                if col not in df_processing.columns:
                    df_processing[col] = np.nan
            df_processing = df_processing[df_subset_process]
            df_processing_col_list = df_processing.columns.tolist()
            print('The total levels to be exploded : %d' % len(explode_ld))
            level = len(explode_ld)
            for i in range(level):
                print(' Exploding the Level : %d' % i)
                df_processing_col_list = df_processing.columns.tolist()
                list_of_explode = set(df_processing_col_list) & set(explode_ld[i + 1])
                """If found in the explode list, explode it; some xml need not
                contain a list here (it could be a plain column), so handle both"""
                for c in list_of_explode:
                    print(' There are columns present which need to be exploded - ' + str(c))
                    df_processing = pd.concat((
                        df_processing.iloc[[type(item) == list for item in df_processing[c]]].explode(c),
                        df_processing.iloc[[type(item) != list for item in df_processing[c]]]))
                    print(' Finding the columns to be fetched ')
                """From the overall column list, fetching the attributes needed after this explode"""
                next_level_pro_lst = getMatches(df_subset_list_all_cols, explode_ld[i + 1])
                df_processing_col_list = df_processing.columns.tolist()
                for nex in next_level_pro_lst:
                    parent_col = nex.rsplit('.', 1)[0]
                    child_col = nex.rsplit('.', 1)[1]
                    if parent_col not in df_processing_col_list:
                        df_processing[parent_col] = ""
                    try:
                        df_processing[nex] = df_processing[parent_col].apply(lambda x: x.get(child_col))
                    except AttributeError:
                        df_processing[nex] = ""
                df_processing_col_list = df_processing.columns.tolist()
                if i == level - 1:
                    print('Last Level - nothing to be done')
                else:
                    """Extracting all columns until the next explode column list is found"""
                    while len(set(df_processing_col_list) & set(explode_ld[i + 2])) == 0:
                        next_level_pro_lst = getMatches(df_subset_list_all_cols, next_level_pro_lst)
                        for nextval in next_level_pro_lst:
                            if nextval not in df_processing_col_list:
                                if nextval.rsplit('.', 1)[0] not in df_processing.columns:
                                    df_processing[nextval.rsplit('.', 1)[0]] = ""
                                try:
                                    df_processing[nextval] = df_processing[nextval.rsplit('.', 1)[0]].apply(lambda x: x.get(nextval.rsplit('.', 1)[1]))
                                except AttributeError:
                                    df_processing[nextval] = ""
                        df_processing_col_list = df_processing.columns.tolist()
            df_processing = df_processing[df_subset_list_all_cols]
            df_processing.columns = df_final_column_name
            # if the file does not exist, write it with a header
            if not os.path.isfile(items):
                print("The file does not exist, so writing a new one")
                df_processing.to_csv('{}'.format(items), header='column_names', index=None)
            else:  # it exists, so append without writing the header
                print("The file exists, so appending")
                df_processing.to_csv('{}'.format(items), mode='a', header=False, index=None)


    startTime ="%Y%m%d_%H%M%S")
    startTime = str(os.getpid()) + "_" + startTime
    process_task_name = ''
    process_config_csv = 'config.csv'
    xml_file_name = 'test.xml'
    old_print = print


    def timestamped_print(*args, **kwargs):
        now ="%Y-%m-%d %H:%M:%S.%f")
        printheader = now + " xml_parser " + " " + process_task_name + " - "
        old_print(printheader, *args, **kwargs)


    print = timestamped_print

    xml_parse(xml_file_name)

    The output files created are:

    [, ~]$ cat Name.csv
    FirstName,LastName,ContactNo,Email
    Hal,Thanos,122131,
    Iron,Man,12324,
    Captain,America,13322,
    Sword,Man,12324,
    Thor,Odison,156565,
    Spider,Man,12324,
    Black,Widow,16767,
    White,Man,5634,

    [, ~]$ cat Address.csv
    FirstName,LastName,ContactNo,Email,City,State,Zip,type
    Iron,Man,12324,,Bangalore,Karnataka,560212,Permanent
    Iron,Man,12324,,Concord,NC,28027,Temporary
    Hal,Thanos,122131,,Bangalore,Karnataka,560212,
    Sword,Man,12324,,Bangalore,Karnataka,560212,Permanent
    Sword,Man,12324,,Concord,NC,28027,Temporary
    Captain,America,13322,,Trivandrum,Kerala,28115,
    Spider,Man,12324,,Bangalore,Karnataka,560212,Permanent
    Spider,Man,12324,,Concord,NC,28027,Temporary
    Thor,Odison,156565,,Tirunelveli,TamilNadu,36595,
    White,Man,5634,,Bangalore,Karnataka,560212,Permanent
    White,Man,5634,,Concord,NC,28027,Temporary
    Black,Widow,16767,,Mysore,Karnataka,12478,

    [, ~]$ cat Form.csv
    FirstName,LastName,ContactNo,Email,type,id,value
    Iron,Man,12324,,Temporary,ID1,LIC
    Iron,Man,12324,,Temporary,ID2,PAS
    Iron,Man,12324,,Temporary,ID3,SSN
    Iron,Man,12324,,Temporary,ID2,CC
    Hal,Thanos,122131,,,ID1,LIC
    Hal,Thanos,122131,,,ID2,PAS
    Iron,Man,12324,,Permanent,ID3,LIC
    Sword,Man,12324,,Temporary,ID1,LIC
    Sword,Man,12324,,Temporary,ID2,PAS
    Sword,Man,12324,,Temporary,ID3,SSN
    Sword,Man,12324,,Temporary,ID2,CC
    Captain,America,13322,,,ID1,LIC
    Captain,America,13322,,,ID2,PAS
    Sword,Man,12324,,Permanent,ID3,LIC
    Spider,Man,12324,,Temporary,ID1,LIC
    Spider,Man,12324,,Temporary,ID2,PAS
    Spider,Man,12324,,Temporary,ID3,SSN
    Spider,Man,12324,,Temporary,ID2,CC
    Thor,Odison,156565,,,ID1,LIC
    Thor,Odison,156565,,,ID2,PAS
    Spider,Man,12324,,Permanent,ID3,LIC
    White,Man,5634,,Temporary,ID1,LIC
    White,Man,5634,,Temporary,ID2,PAS
    White,Man,5634,,Temporary,ID3,SSN
    White,Man,5634,,Temporary,ID2,CC
    White,Man,5634,,Permanent,ID3,LIC
    Black,Widow,16767,,,ID1,LIC

    The pieces of this answer are extracted from different threads; thanks to @Mark Tolonen, @Mandy007 and @deadshot:

    Create a dict of list using python from csv

    How to explode Panda column with data having different dict and list of dict

    This can definitely be made shorter and more performant, and can be enhanced further.

    • 4 answers
    • 0 votes
  • To convert a Kotlin data class to XML using FasterXML:

    1. Ensure you add the dependency to your pom:

        <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-xml</artifactId>
            <version>2.10.1</version>
        </dependency>
    2. On the data class, add @field: so that the @JacksonXmlProperty annotation is not ignored:

        @JacksonXmlRootElement(localName = "COMMAND")
        data class AirtelExpressRequest(
            @field:JacksonXmlProperty(localName = "TYPE")
            val type: String,

            @field:JacksonXmlProperty(localName = "INTERFACEID")
            val interfaceId: String,

            @field:JacksonXmlProperty(localName = "MSISDN")
            val msisdn: String,

            @field:JacksonXmlProperty(localName = "MSISDN2")
            val msisdn2: String,

            @field:JacksonXmlProperty(localName = "AMOUNT")
            val amount: Int,

            @field:JacksonXmlProperty(localName = "MEMO")
            val memo: String,

            @field:JacksonXmlProperty(localName = "EXTTRID")
            val externalTxnId: String,

            @field:JacksonXmlProperty(localName = "MERCHANT_TXN_ID")
            val merchantTxnId: String,

            @field:JacksonXmlProperty(localName = "IS_TRANS_UNIQUE_CHECK_REQUIRED")
            val isUnique: String = "Y",

            @field:JacksonXmlProperty(localName = "REFERENCE")
            val reference: String,

            @field:JacksonXmlProperty(localName = "serviceType")
            val serviceType: String,

            @field:JacksonXmlProperty(localName = "USERNAME")
            val username: String,

            @field:JacksonXmlProperty(localName = "PASSWORD")
            val password: String
        )
    3. Using XmlMapper, you can then serialize the data class to XML:

        val xmlMapper = XmlMapper(
            JacksonXmlModule().apply { setDefaultUseWrapper(false) }
        ).apply {
            enable(SerializationFeature.INDENT_OUTPUT)
        }
        val strObject = AirtelExpressRequest(
            type            = "MERCHPAY",
            interfaceId     = "DATABUNDLES",
            msisdn          = "733204938",
            msisdn2         = "100001929",
            amount          = 1_000,
            externalTxnId   = "07026984141550752666",
            merchantTxnId   = "07026984141550752666",
            reference       = "Testing transaction",
            memo            = "Enter the PIN for payment of 1000 to purchase testing transaction",
            serviceType     = "MERCHPAY",
            username        = "abcd",
            password        = "abcd123"
        )
        val xml = xmlMapper.writeValueAsString(strObject)
    4. Output:

        <COMMAND>
          <TYPE>MERCHPAY</TYPE>
          <INTERFACEID>DATABUNDLES</INTERFACEID>
          <MSISDN>733204938</MSISDN>
          <MSISDN2>100001929</MSISDN2>
          <AMOUNT>1000</AMOUNT>
          <MEMO>Enter the PIN for payment of 1000 to purchase testing transaction</MEMO>
          <EXTTRID>07026984141550752666</EXTTRID>
          <MERCHANT_TXN_ID>07026984141550752666</MERCHANT_TXN_ID>
          <IS_TRANS_UNIQUE_CHECK_REQUIRED>Y</IS_TRANS_UNIQUE_CHECK_REQUIRED>
          <REFERENCE>Testing transaction</REFERENCE>
          <serviceType>MERCHPAY</serviceType>
          <USERNAME>abcd</USERNAME>
          <PASSWORD>abcd123</PASSWORD>
        </COMMAND>
    • 1 answer
    • 0 votes
  • Asked on July 16, 2020 in Python.

    The bytes must first be turned into a string:

    string = etree.tostring(tree, pretty_print=True).decode("utf-8")  # decode() converts the bytes into a string
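
    The same bytes-versus-str behaviour can be seen with the standard library's xml.etree (a minimal sketch; lxml's etree.tostring behaves the same way on this point):

```python
import xml.etree.ElementTree as ET

root = ET.Element("root")
ET.SubElement(root, "child").text = "hello"

raw = ET.tostring(root)     # bytes: b'<root><child>hello</child></root>'
text = raw.decode("utf-8")  # str, safe to print or embed in other strings
```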
    • 2 answers
    • 0 votes
  • Asked on July 16, 2020 in .NET.

    An LDAP path consists of either:

    1. The server to connect to, which can be the domain DNS name (e.g. or a specific DC, or
    2. The distinguished name you want to bind to (e.g. DC=example,DC=com),

    or both.

    If your computer is joined to the same domain that you’re trying to connect to, or if your computer is joined to a trusted domain, then you don’t need to include the server name. Your computer knows about the domain, so the distinguished name is enough for it to figure it out.

    But if your computer is not joined to the same or trusted domain, then you must include the server name since your computer has no idea where to go for that domain.

    It sounds like you will need to include the server name. If the distinguished name is DC=example,DC=com, then the domain DNS name is, and that's what you use as the server. (Each "DC" part is a "domain component", which you join together with dots in between.)

    That LDAP path would look something like this:

    LDAP://
    You only need to include the distinguished name if you want to bind to something below the root of the domain. So this:

    LDAP://
    Is equivalent to this:

    LDAP://,DC=com
    But if you wanted to search only one specific OU, then you must include the distinguished name of that OU, for example (with a hypothetical OU named Sales):

    LDAP://,DC=example,DC=com

    • 1 answer
    • 0 votes
  • TL;DR: Try using Html.Partial instead of RenderPage

    I was getting "Object reference not set to an instance of an object" when I tried to render a view within a view by passing it a model, like this:

    @{
        MyEntity M = new MyEntity();
    @RenderPage("_MyOtherView.cshtml", M); // error in _MyOtherView: the Model was null

    Debugging showed the model was null inside _MyOtherView. Then I changed it to:

    @{
        MyEntity M = new MyEntity();
    @Html.Partial("_MyOtherView.cshtml", M);

    And it worked.

    Furthermore, the reason I didn't use Html.Partial to begin with was that Visual Studio sometimes puts error-looking squiggly lines under Html.Partial when it sits inside a foreach loop constructed this way, even though it's not really an error:

    @inherits System.Web.Mvc.WebViewPage
    @{
        ViewBag.Title = "Entity Index";
        List<MyEntity> MyEntities = new List<MyEntity>();
        MyEntities.Add(new MyEntity());
        MyEntities.Add(new MyEntity());
        MyEntities.Add(new MyEntity());
    }
    <div>
        @{
            foreach (var M in MyEntities)
            {
                // Squiggly lines below. Hovering says: cannot convert method group 'Partial'
                // to non-delegate type object, did you intend to invoke the method?
                @Html.Partial("MyOtherView.cshtml");
            }
        }
    </div>

    But I was able to run the application with no problems despite this "error". I got rid of it by changing the structure of the foreach loop to this:

    @foreach (var M in MyEntities)
        ...

    Although I have a feeling that was because Visual Studio was misreading the @ signs and brackets.

    • 28 answers
    • 0 votes
  • The reason that you’re receiving a null reference exception is due to this line in the Bill class:

    public List<BillLine> LineItems { get; set; }

    It's currently not initializing the list. The simplest way to resolve it would be:

    public List<BillLine> LineItems { get; set; } = new List<BillLine>();

    • 3 answers
    • 0 votes
  • Asked on July 16, 2020 in Mysql.

    There is no "protected column" concept, no. You would need to use triggers.

    • 1 answer
    • 0 votes
  • You are not using bind_result() properly.

    Binds columns in the result set to variables.

    You are trying to bind the entire result set into a single variable. You need to provide a variable for each column in the result set.


    Here is where it fits in:

    $sql = "SELECT `topic`,`detail`,`email`,`name`,`datetime` FROM `$safe_tbl_name` WHERE id=?";
    if ($stmt = $con->prepare($sql)) {
        $stmt->bind_param("s", $id);
        $stmt->execute();
        $stmt->bind_result($topic, $detail, $email, $name, $datetime);
        //while($stmt->fetch()){  // not wrong, but no need to loop if there is only one row
        $stmt->fetch();
        echo "<table width=\"400\" border=\"0\" align=\"center\" cellpadding=\"0\" cellspacing=\"1\" bgcolor=\"#CCCCCC\">";
            echo "<tr>";
                echo "<td>";
                    echo "<table width=\"100%\" border=\"0\" cellpadding=\"3\" cellspacing=\"1\" bordercolor=\"1\" bgcolor=\"#FFFFFF\">";
                        echo "<tr>";
                            echo "<td bgcolor=\"#F8F7F1\"><strong>$topic</strong></td>";
                        echo "</tr>";
                        echo "<tr>";
                            echo "<td bgcolor=\"#F8F7F1\">$detail</td>";
                        echo "</tr>";
                        echo "<tr>";
                            echo "<td bgcolor=\"#F8F7F1\"><strong>By :</strong>$name <strong>Email : </strong>$email</td>";
                        echo "</tr>";
                        echo "<tr>";
                            echo "<td bgcolor=\"#F8F7F1\"><strong>Date/time : </strong>$datetime</td>";
                        echo "</tr>";
                    echo "</table>";
                echo "</td>";
            echo "</tr>";
        echo "</table>";
    //}
        $stmt->close();
    }

    Alternatively, if you want to use * in your SELECT, you could try the following non-bind_result method (all the examples I have read online only use bind_result when not using * in the SELECT):

    if ($stmt->execute()) {
        $result = $stmt->get_result();
        $row = $result->fetch_assoc();
    } else {
        echo "execute failed";  // but I don't think this is your problem
    // $row['topic']
    // $row['detail']
    // $row['email']
    // $row['name']
    // $row['datetime']
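
    The get_result/fetch_assoc pattern, fetching one row keyed by column name instead of binding each column to its own variable, has a close analogue in Python's standard-library sqlite3 (a sketch with made-up table data, not part of the original answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row  # rows become name-addressable, like fetch_assoc()
con.execute("CREATE TABLE posts (topic TEXT, detail TEXT, email TEXT)")
con.execute("INSERT INTO posts VALUES ('Hello', 'First post', '')")

# Parameterized query, then fetch a single row keyed by column name.
row = con.execute("SELECT * FROM posts WHERE rowid = ?", (1,)).fetchone()
print(row["topic"])  # prints: Hello
```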
    • 1 answer
    • 0 votes
  • Asked on July 16, 2020 in Mysql.

    Your local database server is failing to start up, and it’s being started with enough layers between you and it that any reporting it’s doing on why is being hidden. To attempt to start it while being able to see what it’s complaining about, try the methodology here: start MySQL server from command line on Mac OS Lion

    • 1 answer
    • 0 votes