Is It Wasteful to Create a New Database Table Instead of Using Enum Data Type?

Option #2, using reference tables, is the standard way of doing it. It has been used by millions of programmers and is known to work. It is a pattern, so anyone else looking at your schema will immediately know what is going on, and there exist libraries and tools that work on databases, saving you lots and lots of work, that will handle it correctly. The benefits of using it are innumerable.

Is it wasteful? Yes, but only slightly. Any half-decent database will keep such frequently joined small tables cached, so the waste is generally imperceptible.

All other options that you described are ad hoc and hacky, including MySQL's enum, because it is not part of the SQL standard. (Other than that, what sucks with enum is MySQL's implementation, not the idea itself; I would not mind seeing it one day as part of the standard.)

Your final option #3, using a plain integer, is especially hacky. You get the worst of all worlds: no referential integrity, no named values, no definitive knowledge within the database of what a value stands for, just arbitrary integers thrown all over the place. By this token, you might as well quit using constants in your code and start using hard-coded values instead: circumference = radius * 6.28318530718;. How about that?

I think you should re-examine why you find reference tables onerous. Nobody else finds them onerous, as far as I know. Could it be that you are not using the right tools for the job? Your sentences about having to "encode things and deal with integers", having to "create elaborate programming constructs", or "creating new object oriented entities on the programming side" tell me that you may be attempting to do object-relational mapping (ORM) on the fly, dispersed throughout the code of your application, or at best rolling your own object-relational mapping mechanism, instead of using an existing ORM tool for the job, such as Hibernate. All these things are a breeze with Hibernate. It takes a little while to learn, but once you have learned it you can really focus on developing your application and forget about the nitty-gritty mechanics of how stuff is represented in the database.

Finally, if you want to make your life easier when working directly with the database, there are at least two things you can do that I can think of right now:

Create views that join your main tables with whatever reference tables they reference, so that each row contains not only the reference ids but also the corresponding names.
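For example, a minimal sketch of such a view, assuming the service_line_item/service_type schema from the question (the view name is my own choice for illustration):

CREATE VIEW v_service_line_item AS
SELECT li.id,
       li.description,
       st.service_type
FROM service_line_item AS li
JOIN service_type AS st ON st.id = li.service_type_id;

Querying v_service_line_item then gives you human-readable type names without writing the JOIN every time.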

Instead of using an integer id for the reference table, use a CHAR(4) column with 4-letter abbreviations. So, the ids of your categories would become "TEST", "DSGN", "PROG", "OTHR". (Their descriptions would remain proper English words, of course.) It will be a bit slower, but trust me, nobody will notice.
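A sketch of what that could look like (the description column, sizes, and constraints are assumptions for illustration):

CREATE TABLE service_type (
    id          CHAR(4) PRIMARY KEY,  -- 'TEST', 'DSGN', 'PROG', 'OTHR'
    description VARCHAR(50) NOT NULL
);

INSERT INTO service_type (id, description)
VALUES ('TEST', 'Testing'),
       ('DSGN', 'Design'),
       ('PROG', 'Programming'),
       ('OTHR', 'Other');

This way the foreign key values themselves are readable in ad hoc queries, with no JOIN needed just to see what a row means.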

Finally, when there are only two types, most people just use a boolean column. So, that "standard/exception" column would be implemented as a boolean and it would be called "IsException".
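A sketch in the same spirit, on the question's service_line_item table (the NOT NULL and DEFAULT are my assumptions):

ALTER TABLE service_line_item
ADD COLUMN IsException BOOLEAN NOT NULL DEFAULT FALSE;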

Suppose I have 4 types of services I offer (they are unlikely to change often):

Testing

Design

Programming

Other

Suppose I have 60-80 actual services that each fall into one of the above categories. For example, a service can be "Test Program using technique A" and it is of type "Testing".

I want to encode them into a database. I came up with a few options:

Option 0:

Use VARCHAR to encode the service type directly as a string

Option 1:

Use a database enum. But enum is evil

Option 2:

Use two tables:

service_line_item (id, service_type_id INT, description VARCHAR);

service_type (id, service_type VARCHAR);
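Spelled out as runnable DDL, that could look like this (types, sizes, and constraints beyond the shown columns are my assumptions):

CREATE TABLE service_type (
    id           INT PRIMARY KEY,
    service_type VARCHAR(50) NOT NULL
);

CREATE TABLE service_line_item (
    id              INT PRIMARY KEY,
    service_type_id INT NOT NULL,
    description     VARCHAR(255)
);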

I can even enjoy referential integrity:

ALTER TABLE service_line_item ADD FOREIGN KEY (service_type_id) REFERENCES service_type (id);
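With that constraint in place, the database itself rejects a line item pointing at a type that does not exist; the row values below are made up for illustration:

INSERT INTO service_type (id, service_type) VALUES (1, 'Testing');

-- Succeeds: type 1 exists.
INSERT INTO service_line_item (id, service_type_id, description)
VALUES (1, 1, 'Test Program using technique A');

-- Fails with a foreign key violation: there is no type 99.
INSERT INTO service_line_item (id, service_type_id, description)
VALUES (2, 99, 'Bogus line item');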

Sounds good, yes?

But I still have to encode things and deal with integers, e.g. when populating the table, or I have to create elaborate programming or DB constructs when populating or dealing with the table: namely, JOINs when dealing with the database directly, or new object-oriented entities on the programming side, and making sure I operate them correctly.

Option 3:

Don't use enum, do not use two tables, but just use an integer column

service_line_item (
    id,
    service_type INT, -- use 0, 1, 2, 3 for service types
    description VARCHAR
);

This is like a 'fake enum' that requires more overhead on the code side of things, e.g. knowing that 2 = 'Programming' and dealing with it appropriately.

Question:

Currently I have implemented it using Option 2, guided by these principles:

do not use enum (option 1)

avoid using a database as a spreadsheet (option 0)

But I can't help feeling that this is wasteful in terms of programming and cognitive overhead -- I have to be aware of, and deal with, two tables instead of one.

For a 'less wasteful' way, I am looking at Option 3. It is lighter and requires essentially the same code constructs to operate (with slight modifications; the complexity and structure are basically the same, but with a single table).

I suppose it is not always wasteful, and there are good cases for either option, but is there a good guideline as to when one should use Option 2 and when Option 3?

When there are only two types (binary)

To add a bit more to this question... in the same vein, I have a binary option of "Standard" or "Exception" service, which can apply to the service line item. I have encoded that using Option 3.

I chose not to create a new table just to hold the values "Standard" and "Exception". So my column just holds 0 or 1, the column is called exception, and my code translates 0, 1 to STANDARD, EXCEPTION (which I encoded as constants in the programming language).

So far I am not liking that way either (not liking Option 2 nor Option 3).

I do find Option 2 superior to 3, but with more overhead, and still I cannot escape encoding things as integers no matter which of the two options I use.

ORM

To add some context after reading the answers: I have recently started using an ORM again, in my case Doctrine 2. After defining the DB schema via annotations, I wanted to populate the database. Since my entire data set is relatively small, I wanted to try using programming constructs to see how it works.

I first populated service_types, and then service_line_items, as there was an existing list from an actual spreadsheet. So things like 'standard/exception' and 'Testing' are all strings on the spreadsheet, and they have to be encoded into proper types before being stored in the DB.

I found this SO answer: What do you use instead of ENUM in doctrine2?,

which suggested not using the DB's enum construct, but using an INT field and encoding the types via the 'const' construct of the programming language.

But as pointed out in the above SO question, I can avoid using integers directly and use language constructs -- constants -- once they are defined.

But still, no matter how you turn it, if I am starting with a string as a type, I have to first convert it to the proper type, even when using an ORM.

So if, say, $str = 'Testing';, I still need to have a block somewhere that does something like:

switch ($str) {
    case 'Testing': $type = MyEntity::TESTING; break;
    case 'Other': $type = MyEntity::OTHER; break;
}

The good thing is you are not dealing with integers/magic numbers (instead, you deal with named constants), but the bad thing is you can't auto-magically pull things in and out of the database without this conversion step, to my knowledge.

And that's what I meant, in part, by saying things like "still have to encode things and deal with integers". (Granted, after Ocramius' comment, I won't have to deal directly with integers, but with named constants and some conversion to/from constants as needed.)

