"A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously from Knowledge Graphs" was selected for publication as a short paper! Out of 930 submissions, only 91 regular papers and 92 short papers were accepted. This corresponds to a full paper acceptance rate of 9.8% and an overall acceptance rate 19.7%. Enclosed at the bottom of this message, please find the review report for your paper. Papers went through a rigorous review process. Each paper was reviewed by at least three program committee members. Please carefully consider the comments of the reviewers when preparing the final version of your paper. Congratulations and Best Wishes, Claudia Plant and Haixun Wang (ICDM PC Chairs) ===================================================================== --======== Review Reports ========-- The review report from reviewer #1: *1: Is the paper relevant to ICDM? [_] No [X] Yes *2: How innovative is the paper? [_] 6 (Very innovative) [X] 3 (Innovative) [_] -2 (Marginally) [_] -4 (Not very much) [_] -6 (Not at all) *3: How would you rate the technical quality of the paper? [_] 6 (Very high) [X] 3 (High) [_] -2 (Marginal) [_] -4 (Low) [_] -6 (Very low) *4: How is the presentation? [_] 6 (Excellent) [X] 3 (Good) [_] -2 (Marginal) [_] -4 (Below average) [_] -6 (Poor) *5: Is the paper of interest to ICDM users and practitioners? [X] 3 (Yes) [_] 2 (May be) [_] 1 (No) [_] 0 (Not applicable) *6: What is your confidence in your review of this paper? [X] 2 (High) [_] 1 (Medium) [_] 0 (Low) *7: Overall recommendation [_] 6: must accept (in top 25% of ICDM accepted papers) [X] 3: should accept (in top 80% of ICDM accepted papers) [_] -2: marginal (in bottom 20% of ICDM accepted papers) [_] -4: should reject (below acceptance bar) [_] -6: must reject (unacceptable: too weak, incomplete, or wrong) *8: Summary of the paper's main contribution and impact This paper proposes an interesting approach for mixing symbolic and sub-symbolic approaches to link prediction via an iterative mutual data augmentation process. *9: Justification of your recommendation The approach is fairly innovative and propose a new exploration avenue for researchers exploring neuro-symbolic approaches. *10: Three strong points of this paper (please number each point) - Innovative approach - Well written - Robust results *11: Three weak points of this paper (please number each point) - Results are somewhat sub-par for some datasets compared with significantly simpler approaches, e.g. https://github.com/facebookresearch/kbc/ - Misses some very relevant literature in this space *12: Is this submission among the best 10% of submissions that you reviewed for ICDM'20? [X] No [_] Yes *13: Would you be able to replicate the results based on the information given in the paper? [X] No [_] Yes *14: Are the data and implementations publicly available for possible replication? [X] No [_] Yes *15: If the paper is accepted, which format would you suggest? [_] Regular Paper [X] Short Paper *16: Detailed comments for the authors Please see my comments above. My main criticism is that this paper is missing some literature in the space of neural rule induction for knowledge base completion, e.g. 
- https://arxiv.org/abs/1711.05851
- https://arxiv.org/abs/1912.10824
- https://arxiv.org/abs/2007.06477
- https://dl.acm.org/citation.cfm?id=2851841
- https://arxiv.org/abs/1707.07596

Note: in Algorithm 1, on line 7, G_0^+ should probably be G_i^+.

========================================================

The review report from reviewer #2:

*1: Is the paper relevant to ICDM? [_] No [X] Yes
*2: How innovative is the paper? [_] 6 (Very innovative) [_] 3 (Innovative) [X] -2 (Marginally) [_] -4 (Not very much) [_] -6 (Not at all)
*3: How would you rate the technical quality of the paper? [_] 6 (Very high) [_] 3 (High) [X] -2 (Marginal) [_] -4 (Low) [_] -6 (Very low)
*4: How is the presentation? [_] 6 (Excellent) [_] 3 (Good) [X] -2 (Marginal) [_] -4 (Below average) [_] -6 (Poor)
*5: Is the paper of interest to ICDM users and practitioners? [X] 3 (Yes) [_] 2 (May be) [_] 1 (No) [_] 0 (Not applicable)
*6: What is your confidence in your review of this paper? [X] 2 (High) [_] 1 (Medium) [_] 0 (Low)
*7: Overall recommendation [_] 6: must accept (in top 25% of ICDM accepted papers) [_] 3: should accept (in top 80% of ICDM accepted papers) [X] -2: marginal (in bottom 20% of ICDM accepted papers) [_] -4: should reject (below acceptance bar) [_] -6: must reject (unacceptable: too weak, incomplete, or wrong)
*8: Summary of the paper's main contribution and impact
This paper proposes a hybrid framework to learn embeddings and rules for knowledge graph (KG) completion. The proposed framework is general, and most KG embedding approaches can be applied within it. The comprehensive experiments and analyses show the superiority of the proposed framework.
*9: Justification of your recommendation
Although there are some unclear parts in the rule mining, the proposed framework is still sound. The experiments and analyses are comprehensive and show the advantages of this paper over the state-of-the-art approaches.
*10: Three strong points of this paper (please number each point)
1. The related work is sufficient.
2. The experiments and analyses are comprehensive.
3. The case study shows the advantages of the proposed framework over the state-of-the-art approaches.
*11: Three weak points of this paper (please number each point)
1. There are some unclear parts in the rule mining.
2. The importance of "importance sampling" is not mentioned. Also, there is no experiment showing the contribution of being equipped with importance sampling.
3. There should be sensitivity tests on \beta, \omega, and K.
*12: Is this submission among the best 10% of submissions that you reviewed for ICDM'20? [X] No [_] Yes
*13: Would you be able to replicate the results based on the information given in the paper? [_] No [X] Yes
*14: Are the data and implementations publicly available for possible replication? [X] No [_] Yes
*15: If the paper is accepted, which format would you suggest? [X] Regular Paper [_] Short Paper
*16: Detailed comments for the authors
1. There are some unclear parts in the rule mining.
(1) In Sec. II (Background), r(X,Y) is unclear since X and Y are not introduced. Moreover, providing an example in this section may help.
(2) In Sec. V-B (Rule Mining with Embedding Feedback), the difference between a variable and an entity is unclear. Also, providing an example may help.
(3) In Sec. V-B (Rule Mining with Embedding Feedback), the increase with respect to the rule quality measure Q is unclear. What is a candidate rule compared against in order to check whether there is an increase?
(4) In Sec. VII-A (Computational Complexity), the exact complexity of the rule mining is not given.
2. Table I is placed too far from where it is first referenced.
3. Unlike association rule mining, which considers both support and confidence, why does the rule mining consider only confidence and not support?
4. Table III should be interchanged with Table II, since Table III is mentioned earlier.
5. In the experiments, the use of RotatE as the embedding approach should be mentioned in the setting.

========================================================

The review report from reviewer #3:

*1: Is the paper relevant to ICDM? [_] No [X] Yes
*2: How innovative is the paper? [_] 6 (Very innovative) [_] 3 (Innovative) [X] -2 (Marginally) [_] -4 (Not very much) [_] -6 (Not at all)
*3: How would you rate the technical quality of the paper? [_] 6 (Very high) [_] 3 (High) [X] -2 (Marginal) [_] -4 (Low) [_] -6 (Very low)
*4: How is the presentation? [_] 6 (Excellent) [_] 3 (Good) [X] -2 (Marginal) [_] -4 (Below average) [_] -6 (Poor)
*5: Is the paper of interest to ICDM users and practitioners? [_] 3 (Yes) [X] 2 (May be) [_] 1 (No) [_] 0 (Not applicable)
*6: What is your confidence in your review of this paper? [_] 2 (High) [X] 1 (Medium) [_] 0 (Low)
*7: Overall recommendation [_] 6: must accept (in top 25% of ICDM accepted papers) [X] 3: should accept (in top 80% of ICDM accepted papers) [_] -2: marginal (in bottom 20% of ICDM accepted papers) [_] -4: should reject (below acceptance bar) [_] -6: must reject (unacceptable: too weak, incomplete, or wrong)
*8: Summary of the paper's main contribution and impact
This paper presents a hybrid model for knowledge graph reasoning that leverages the complementary properties of embedding methods and logical rule mining methods.
*9: Justification of your recommendation
This paper combines two complementary approaches for learning embeddings and logical rules on knowledge graphs. The proposed method is interesting, but both the methodology and the empirical evaluation lack clarity.
*10: Three strong points of this paper (please number each point)
1. The proposed framework is an iterative process in which embedding learning and rule mining can reinforce each other.
2. Experiments on benchmark datasets show significant improvement over existing baselines.
*11: Three weak points of this paper (please number each point)
1. The stopping criterion of the hybrid learning procedure is unclear.
2. The presentation and organization of the paper should be improved.
3. The effect of hyper-parameters on the overall performance is not discussed.
*12: Is this submission among the best 10% of submissions that you reviewed for ICDM'20? [X] No [_] Yes
*13: Would you be able to replicate the results based on the information given in the paper? [X] No [_] Yes
*14: Are the data and implementations publicly available for possible replication? [X] No [_] Yes
*15: If the paper is accepted, which format would you suggest? [X] Regular Paper [_] Short Paper
*16: Detailed comments for the authors
1. The proposed hybrid architecture consists of two iterative processes: embedding learning and rule mining. However, it is not clear what the stopping criterion of the hybrid learning procedure is.
2. Some necessary experimental details are missing.
- The effect of the hyper-parameter \omega in Eq. (7) on the overall performance is not discussed.
- The rule evaluation results in Table IV lack clarity. Only precision is evaluated in this part; why not use recall or F1-score instead? The necessary justification is missing. Also, on WN18RR, why are the results from top-50 to top-500 missing?

========================================================

Meta Review:
The opinions of the reviewers diverge somewhat. The main criticisms relate to some missing related work (for which the reviewers also give detailed references) and to some unclear points in the presentation. In the discussion, the opinion emerged that the work is nevertheless sufficiently novel and that most of its claims are supported by experimental analysis. The presentation could, however, be improved (and possibly also condensed).