Given the difficulty in accurately determining the total amount of retained hydrocarbons in shale with current experimental techniques, this study aims to achieve a precise evaluation of shale oil content. Using sealed core samples from well H in the Subei Basin as the subjects, this study employed multiple experimental methods, including freeze pyrolysis, multi-temperature step pyrolysis, sealed thermal release, and two-dimensional nuclear magnetic resonance (2D-NMR), to systematically evaluate the oil content and mobility. Through comparative pyrolysis experiments at different storage times and sealed thermal release experiments, the light hydrocarbon recovery coefficient for shale oil in the second member of the Funing Formation was determined to be 1.99. Combined with the difference in pyrolysis <italic>S</italic><sub>2</sub> peak areas before and after extraction, a heavy hydrocarbon correction formula was established (0.4526×<italic>S</italic><sub>2</sub> − 0.3079), enabling accurate calculation of the total retained hydrocarbons. Furthermore, 2D-NMR technology was used to calibrate crude oils of different qualities, and a standard curve between hydrogen nucleus signal intensity and oil mass was established, enabling non-destructive and rapid determination of oil content. By comparing NMR spectra before and after oil washing, the <italic>T</italic><sub>2</sub> cutoff values for movable and adsorbed oil were identified, facilitating the calculation of free oil content and its proportion. The experimental results showed that the oil content measured by the 2D-NMR method was highly consistent with the recovered oil content, and the proportion of free oil showed a good correlation with results from multi-temperature step pyrolysis.
The technical framework of “light hydrocarbon recovery-heavy hydrocarbon correction-NMR calibration-movable oil identification” established in this study offers advantages such as relative operational simplicity, a broad detection range, and non-destructiveness to samples. Overall, it significantly improves the accuracy and efficiency of shale oil content and mobility evaluation, providing crucial experimental support for shale oil sweet spot identification, reserve calculation, and development potential assessment.
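As an illustration, the two corrections described above can be combined into a single estimate. The coefficients (1.99 for light-hydrocarbon recovery, 0.4526×S2 − 0.3079 for the heavy-hydrocarbon correction) come from the abstract; applying the light-hydrocarbon coefficient to the pyrolysis S1 peak and summing the two terms is an assumption about how the study combines them, shown here only as a sketch:

```python
def total_retained_hydrocarbons(s1, s2, light_hc_coeff=1.99,
                                a=0.4526, b=0.3079):
    """Illustrative estimate of total retained hydrocarbons (mg HC/g rock).

    s1 : pyrolysis free-hydrocarbon peak S1
    s2 : pyrolysis S2 peak
    The coefficient 1.99 and the correction 0.4526*S2 - 0.3079 are taken
    from the abstract; combining them as a simple sum with S1 is an
    assumption for illustration, not the study's published workflow.
    """
    light_corrected = light_hc_coeff * s1   # restore evaporated light ends
    heavy_correction = a * s2 - b           # heavy HCs locked in the S2 peak
    return light_corrected + heavy_correction

# Hypothetical values: S1 = 3.0, S2 = 8.0 mg HC/g rock
print(total_retained_hydrocarbons(3.0, 8.0))
```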
Petroleum refining. Petroleum products, Gas industry
A gas field with reserves exceeding 100 billion cubic meters has been discovered in the Central Canyon on the southern slope of the Lingshui Sag in the Qiongdongnan Basin. However, the northern slope shows poor oil and gas enrichment, with gas detected but no fields found. One of the key reasons is that no large-scale high-quality reservoirs have been encountered during drilling. To clarify the sedimentary evolution model and distribution patterns of high-quality sand bodies on the northern slope of the Lingshui Sag, this study integrated drilling, logging, mud logging, testing, and seismic data, using techniques such as thin section observation, grain size analysis, and physical property testing. Core facies, logging facies, and seismic facies analyses were carried out for the key strata to establish the sedimentary evolution model of the Meishan Formation. Combined with reservoir microscopic characteristics and fault-sand matching, the oil and gas geological significance was clarified. The results showed that during the Meishan Formation period, sediment was sourced from Hainan Island, and a shelf delta-submarine fan sedimentary system developed. In the study area, the microfacies sand bodies of channels and channel-lobe complexes were relatively coarse and thick, with box-shaped or bell-shaped logging curves, and stratification and bioturbation were observed in the cores. Seismic data showed U-shaped or V-shaped low-frequency continuous parallel reflections, which served as the main exploration targets in the study area. The development of submarine fans and the differentiation of their internal sand bodies were mainly controlled by fluctuations in relative sea level, paleogeomorphic features, and the intensity of sediment supply.
During the depositional period of the second member of the Meishan Formation (hereinafter referred to as Meishan 2), the relative sea level dropped, the sediment supply was abundant, and the relative accommodation space was relatively small, with <italic>A</italic>/<italic>S</italic> ≤ 1 (<italic>A</italic> representing relative accommodation space and <italic>S</italic> representing sediment supply). Sediments were transported over long distances to the continental slope, forming multiple phases of submarine fan progradation. Laterally, the development of submarine fans and the differences within their internal sand bodies were controlled by paleogeomorphology and distance from the sediment source, with fans mainly developing in the proximal slope break zones and the fault-controlled slope break zones formed by synsedimentary faults. The Meishan 2 reservoirs in the study area had porosity ranging from 8.40% to 26.24% and permeability ranging from 0.05×10<sup>-3</sup> µm<sup>2</sup> to 26.49×10<sup>-3</sup> µm<sup>2</sup>, mainly characterized by medium porosity and ultra-low to low permeability. High-quality reservoirs were controlled by late-stage reworking: contour currents could wash, transport, and redeposit gravity flow sediments formed earlier, significantly improving reservoir physical properties. Under the general background of sand deficiency in the study area, the coupling between faults and sand bodies constrained the degree of oil and gas enrichment. Drilling results showed that oil and gas were highly active near the No. 2 fault zone, whose sand body enrichment zone is an important oil and gas target for future exploration.
To address the underperformance of jet pump deliquification in the high water-cut tight gas reservoirs of the Qingshimao field, the jet pump deliquification technology for such reservoirs needs to be optimized. Based on the overall production characteristics and well condition parameters of gas wells in the Qingshimao field, and considering the technical characteristics of jet pump deliquification under the combined action of water and natural gas, the internal flow field of the jet pump was analyzed through numerical analysis and testing. Then, taking Well Li-x as an example, the characteristics of two jet pump deliquification technologies were compared, and the jet pump deliquification technology for high water-cut tight gas reservoirs was optimized. The study results show that the liquid-gas alternating jet pump deliquification technology adopts short-cycle alternating operations. This approach incorporates the advantage of hydraulic jet pump deliquification, namely its strong liquid-carrying capacity, enabling rapid gas well restoration. Meanwhile, it retains the advantage of pneumatic jet pump deliquification, namely that production is unaffected by liquid-column backpressure, allowing the restored well to transition directly to flowing production. The liquid-gas alternating jet pump deliquification technology increases gas production by 20% compared with the hydraulic (water) jet pump deliquification technology, and increases liquid production by 116% while reducing composite cost by 32% compared with the pneumatic (natural gas) jet pump deliquification technology. These results provide an experimental basis and theoretical reference for optimizing jet pump deliquification technology for high water-cut tight gas reservoirs.
Chemical engineering, Petroleum refining. Petroleum products
Mehdi Fadaei, Mohammad Javad Ameri, Yousef Rafiei, et al.
During oil production, the reservoir pressure declines, causing changes in the hydrocarbon components. To ensure better separation of the produced phases, the separator dimensions should also be adjusted. Since it is not possible to change the dimensions of the separator during production, the liquid level of the separator needs to be adjusted instead to improve phase separation. An intelligent system is required to maintain the liquid level at the value that gives optimal phase separation as the reservoir pressure changes. In this study, a novel correlation is presented to determine the desired liquid level from new separator pressures. For this purpose, an intelligent system was built in the laboratory and tested under different operational conditions. The intelligent system effectively maintained the desired liquid level of the separator through a new correlation technique, using new separator pressure readings collected by installed sensors. This approach helped mitigate the negative effects of the slug flow regime and minimized issues such as foam formation and over-flushing of the separator. It achieved a 99.1% separation efficiency between the gas and liquid phases for liquid and gas flow rates ranging from 0 to 2.35 m<sup>3</sup>/h and from 8 to 17 m<sup>3</sup>/h, respectively. The system could operate under bubble, stratified, plug, and slug flow regimes. The intelligent model obtained from the lab experiments was then integrated into the production model of a southern Iranian oil field. The smart model increased oil production by 13% and prevented the separator from over-flushing over 840 days.
The issue of coal fines production is increasingly prominent in the development of coal-bed methane. Implementing appropriate measures to control the migration and production of coal fines is crucial for achieving stable and high production of coal-bed methane wells. However, the characteristics of coal fines migration and production in the coal seams of the Baode block remain unclear, which hinders the efficient development of coal-bed methane in some wells in this area. To address this problem, core flooding experiments were conducted to investigate the migration and production characteristics of coal fines with respect to influencing factors such as formation water velocity, salinity, gas-water ratio, and effective stress. The experimental results revealed that during the drainage stage, the amount of coal fines produced at low formation water flow is minimal, with coal fines moving within fractures and accumulating at the outlet, forming a coal powder filter cake. However, when the formation water flow surpasses the critical flow, a significant amount of coal fines is produced, and a substantial pressure fluctuation can flush out the coal fines obstructing the outlet. Furthermore, the salinity of the formation water affects its capacity to carry coal powder, with higher salinity increasing the transport capacity. While single-phase gas flow is not effective in driving coal fines migration and production, two-phase flow with a gas-water ratio of 50:50 exhibits a stronger ability to carry coal powder. The concentration of coal fines in the produced liquid continued to decline with the increase of the effective stress loaded on the coal; similarly, the holding pressure at the outlet followed a downward trend, but the displacement pressure difference increased. The research findings provide essential data and a theoretical basis for implementing on-site prevention and control of coal fines production.
In order to promote the strategic planning for the oil import security of China, the relevant influencing factors were studied by integrating document coding with the Decision Making Trial and Evaluation Laboratory-Interpretative Structural Modeling (DEMATEL-ISM) method. The current situation and challenges of China's oil imports were introduced based on the 4A definition of energy security in terms of supply, transportation and price. Meanwhile, an analytical framework for oil import security was established and coupled with the document coding method to define the factors that influence the oil import security of China, thus forming categories of code statistics at different levels. The interrelations and hierarchical structure of the factors were then analyzed by DEMATEL-ISM, and policy suggestions were put forward. The results show that oil import security is influenced directly by superficial factors, including the production and transportation capacity of oil, and fundamentally by underlying factors, such as the stage of economic development and the structure of energy consumption. In addition, the shallow and deep factors, including the oil policy agreement, the import channel pattern, the import source structure and the construction of the emergency system, are the critical points of oil import security, and special attention should be paid to them. Generally, the research results could provide theoretical and methodological support for the oil imports of petroleum companies and governments.
SHEN Kejie1, LI Xingfei1, HUA Yufei1, ZHANG Zhao2, CAO Lianfeng2, WANG Caili2, ZHANG Caimeng1
To improve the emulsification stability of pea protein isolate (PPI) in acidic emulsion systems, soluble electrostatic complex emulsions were prepared by mixing PPI with the anionic polysaccharide carrageenan (CG) under acidic conditions. The stability of the two emulsions was evaluated by measuring the changes in particle size, ζ-potential, microstructure and creaming index of the PPI emulsion and the PPI/CG complex emulsion at different pH values (4-7). The results showed that the particle size of the PPI emulsion exceeded 35 μm at pH 4-5, while that of the PPI/CG complex emulsion was less than 18 μm at pH 4-7. After storage for 14 d, the particle sizes of both emulsions increased slightly. Under acidic conditions, the absolute value of the ζ-potential of the PPI emulsion was less than 30 mV, while that of the PPI/CG complex emulsion was above 40 mV, and the dispersibility of the PPI/CG complex emulsion was markedly better than that of the PPI emulsion. During 14 d of storage, the creaming index of the PPI emulsion gradually increased with storage time, while that of the PPI/CG complex emulsion remained essentially 0. In conclusion, the PPI/CG complex can significantly improve the emulsification stability of PPI under acidic conditions.
This paper explores the application of Sample Entropy (SampEn) as a sophisticated tool for quantifying and predicting volatility in international oil price returns. SampEn, known for its ability to capture underlying patterns and predict periods of heightened volatility, is compared with traditional measures like standard deviation. The study utilizes a comprehensive dataset spanning 1986-2023 and employs both time series regression and machine learning methods. Results indicate SampEn's efficacy in predicting traditional volatility measures, with machine learning algorithms outperforming standard regression techniques during financial crises. The findings underscore SampEn's potential as a valuable tool for risk assessment and decision-making in the realm of oil price investments.
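For readers unfamiliar with the measure, SampEn can be sketched from the standard Richman-Moorman definition: it is −ln(A/B), where B counts pairs of length-m template vectors within tolerance r (Chebyshev distance) and A counts the corresponding length-(m+1) pairs. The defaults below (m = 2, r = 0.2×std) are common conventions, not necessarily the settings used in this study, and the counting is a slightly simplified variant:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy: -ln(A/B), with B and A the counts of template-vector
    pairs of length m and m+1 whose Chebyshev distance is within r.
    Self-matches are excluded. Defaults m=2, r=0.2*std are the usual
    conventions; this is a simplified illustrative variant."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(k):
        # All overlapping templates of length k
        templates = np.array([x[i:i + k] for i in range(len(x) - k + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A regular series (e.g. a sampled sine wave) yields a low SampEn, while white noise of the same length yields a markedly higher value, which is what makes the measure useful as a volatility indicator.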
We consider an online version of the geometric minimum hitting set problem that can be described as a game between an adversary and an algorithm. For some integers $d$ and $N$, let $P$ be the set of points in $(0, N)^d$ with integral coordinates, and let $\mathcal{O}$ be a family of subsets of $P$, called objects. Both $P$ and $\mathcal{O}$ are known in advance by the algorithm and by the adversary. Then, the adversary gives some objects one by one, and the algorithm has to maintain a valid hitting set for these objects using points from $P$, with an immediate and irrevocable decision. We measure the performance of the algorithm by its competitive ratio, that is, the ratio between the number of points used by the algorithm and the offline minimum hitting set for the sub-sequence of objects chosen by the adversary. We present a simple deterministic online algorithm with competitive ratio $O((4\alpha+1)^{2d}\log N)$ when objects correspond to a family of $\alpha$-fat objects. Informally, $\alpha$-fatness measures how cube-like an object is. We show that no algorithm can achieve a better ratio when $\alpha$ and $d$ are fixed constants. In particular, our algorithm works for two-dimensional disks and $d$-cubes, which answers two open questions from related previous papers in the special case where the set of points corresponds to all the points with integral coordinates within a fixed $d$-cube.
We characterize learnability for quantum measurement classes by establishing matching necessary and sufficient conditions for their PAC learnability, along with corresponding sample complexity bounds, in the setting where the learner is given access only to prepared quantum states. We first revisit the results from previous works on this setting. We show that the empirical risk defined in previous works, matching the definition in the classical theory, fails to satisfy the uniform convergence property enjoyed in the classical setting for some learnable classes. Moreover, we show that the VC dimension generalization upper bounds in previous work are frequently infinite, even for finite-dimensional POVM classes. To surmount the failure of the standard ERM to satisfy uniform convergence, we define a new learning rule -- denoised ERM. We show this to be a universal learning rule for POVM and probabilistically observed concept classes, and the condition for it to satisfy uniform convergence is finite fat-shattering dimension of the class. We give quantitative sample complexity upper and lower bounds for learnability in terms of finite fat-shattering dimension and a notion of approximate finite partitionability into approximately jointly measurable subsets, which allows for sample reuse. We then show that finite fat-shattering dimension implies finite coverability by approximately jointly measurable subsets, leading to our matching conditions. We also show that every measurement class defined on a finite-dimensional Hilbert space is PAC learnable. We illustrate our results on several example POVM classes.
DUAN Xulin1, HU Rong1, WANG Rui1, WANG Anti2, ZHOU Bo2, HE Qiang1, CHI Yuanlong1
In order to provide a reference for the quality evaluation of fragrant edible oils, the acid value, peroxide value and antioxidant-active substance contents of three fragrant edible oils, namely fragrant rapeseed oil, fragrant peanut oil and fragrant flaxseed oil, were first determined. The volatile compound composition was then investigated by GC-MS analysis, and the characteristic flavor compounds of the three oils were determined according to their relative odor activity values (ROAV). Furthermore, the oxidation stability of the three oils was studied by TG and DSC analysis. The results showed that the acid values and peroxide values of the three oils were all lower than 1.2 mg KOH/g and 0.05 g/100 g, respectively. The contents of total phenols, tocopherols and sterols in fragrant rapeseed oil were the highest, at 224.40, 451.30 mg/kg and 7 789.41 mg/kg, respectively; the corresponding contents were 76.14, 404.95 mg/kg and 3 279.39 mg/kg in fragrant flaxseed oil, and 56.08, 263.80 mg/kg and 2 617.30 mg/kg in fragrant peanut oil. The content of α-tocopherol in fragrant peanut oil was significantly higher than that in the other two oils, and the main sterol in all three fragrant edible oils was β-sitosterol. The main volatile compounds in fragrant rapeseed oil were glucosinolate degradation products, pyrazines and aldehydes, while those in fragrant peanut oil and fragrant flaxseed oil were mainly aldehydes and pyrazines. The characteristic flavor compounds with the highest ROAV in fragrant rapeseed oil, fragrant peanut oil and fragrant flaxseed oil were 3-butenyl isothiocyanate, isobutyraldehyde, and (Z)-4-heptenal, respectively. The characteristic flavor compound common to the three fragrant edible oils was 2,5-dimethylpyrazine.
The initial oxidation temperature, initial decomposition temperature and oxidation induction time were determined as follows: fragrant rapeseed oil (204.7 ℃, 228.7 ℃ and 138.9 min) > fragrant peanut oil (186.2 ℃, 224.5 ℃ and 44.0 min) > fragrant flaxseed oil (168.0 ℃, 208.7 ℃ and 16.3 min). The antioxidant-active substance content of fragrant rapeseed oil was relatively high, and its oxidation stability was the best.
Convolutional Neural Networks (CNNs) demonstrate excellent performance in various applications but have high computational complexity. Quantization is applied to reduce the latency and storage cost of CNNs. Among the quantization methods, Binary and Ternary Weight Networks (BWNs and TWNs) have a unique advantage over 8-bit and 4-bit quantization: they replace the multiplication operations in CNNs with additions, which are favoured on In-Memory-Computing (IMC) devices. IMC acceleration for BWNs has been widely studied. However, though TWNs have higher accuracy and better sparsity than BWNs, IMC acceleration for TWNs has received little research attention. TWNs on existing IMC devices are inefficient because the sparsity is not well utilized and the addition operation is not efficient. In this paper, we propose FAT as a novel IMC accelerator for TWNs. First, we propose a Sparse Addition Control Unit, which utilizes the sparsity of TWNs to skip the null operations on zero weights. Second, we propose a fast addition scheme based on the memory Sense Amplifier to avoid the time overhead of both carry propagation and writing back the carry to memory cells. Third, we further propose a Combined-Stationary data mapping to reduce the data movement of activations and weights and increase the parallelism across memory columns. Simulation results show that for addition operations at the Sense Amplifier level, FAT achieves 2.00X speedup, 1.22X power efficiency, and 1.22X area efficiency compared with the state-of-the-art IMC accelerator ParaPIM. FAT achieves 10.02X speedup and 12.19X energy efficiency compared with ParaPIM on networks with 80% average sparsity.
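The ternarization that gives TWNs their sparsity can be sketched with the common threshold-based scheme from the Ternary Weight Networks literature: weights become {−1, 0, +1} times a per-tensor scale, so multiplications reduce to sign-controlled additions and the zeros can be skipped. This illustrates the quantization only, not the FAT hardware, and the 0.7 threshold factor is the usual heuristic rather than a detail taken from this paper:

```python
import numpy as np

def ternarize(weights, delta_factor=0.7):
    """Threshold-based ternarization in the TWN style.

    Returns (ternary, scale): ternary has entries in {-1, 0, +1}, and
    scale is the mean magnitude of the surviving weights. delta_factor=0.7
    is the common heuristic, not a value from this paper."""
    w = np.asarray(weights, dtype=float)
    delta = delta_factor * np.mean(np.abs(w))   # ternary threshold
    mask = np.abs(w) > delta
    ternary = np.where(mask, np.sign(w), 0.0)   # {-1, 0, +1}
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return ternary, scale

# Small hypothetical weight vector
w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
t, s = ternarize(w)
print(t, s)  # the zero entries are the sparsity an accelerator can exploit
```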
Petr Andriushchenko, Irina Deeva, Anna Bubnova, et al.
The work focuses on the modelling and imputation of oil and gas reservoir parameters, specifically the problem of predicting the oil recovery factor (RF) using Bayesian networks (BNs). Recovery forecasting is critical for the oil and gas industry as it directly affects a company's profit. However, current approaches to forecasting the RF are complex and computationally expensive. In addition, they require vast amounts of data and are difficult to constrain in the early stages of reservoir development. To address this problem, we propose a BN approach and describe ways to improve the accuracy of parameter predictions. Various training hyperparameters for BNs were considered, and the best ones were used. The approaches of structure and parameter learning, data discretization and normalization, subsampling on analogues of the target reservoir, clustering of networks and data filtering were considered. Finally, a physical model of a synthetic oil reservoir was used to validate the BNs' predictions of the RF. All BN-based modelling approaches provide full coverage of the confidence interval for the RF predicted by the physical model, while requiring less time and data for modelling, which demonstrates the possibility of using them in the early stages of reservoir development. The main result of the work is the development of a methodology for studying reservoir parameters based on Bayesian networks built on small amounts of data and with minimal involvement of expert knowledge. The methodology was tested on the problem of recovery factor imputation.
Marilia Ramos, Camille Major, Nsimah Ekanem, et al.
In the petroleum industry, Quantitative Risk Analysis (QRA) has been one of the main tools for risk management. To date, QRA has mostly focused on technical barriers, despite many accidents having human failure as a primary cause or a contributing factor. Human Reliability Analysis (HRA) allows the human contribution to risk to be assessed both qualitatively and quantitatively. The most credible and highly advanced HRA methods have largely been developed and applied in support of nuclear power plant control room operations and in the context of probabilistic risk analysis. Moreover, many of the HRA methods have issues that have led to inconsistencies and insufficient traceability and reproducibility in both the qualitative and quantitative phases. Given the need to assess human error in the context of the oil industry, it is necessary to evaluate the available HRA methodologies and assess their applicability to petroleum operations. Furthermore, it is fundamental to assess these methods against good practices of HRA and the requirements for advanced HRA methods. The present paper accomplishes this by analyzing seven HRA methods. The evaluation of the methods was performed in three stages. The first stage consisted of an evaluation of the degree of adaptability of each method to the Oil and Gas industry. In the second stage, the methods were evaluated against desirable items in an HRA method. In the third stage, the higher-ranked methods were evaluated against the requirements for advanced HRA methods. In addition to the methods' evaluation, this paper presents an overview of state-of-the-art discussions on HRA, led by the Nuclear industry community, and remarks that these discussions must be seriously considered in defining a technical roadmap to a credible HRA method for the Oil and Gas industry.
The traditional oil supply chain suffers from various shortcomings regarding crude oil extraction, processing, distribution, environmental pollution, and traceability. It offers only a forward flow of products with almost no security and tracking process. In time, the system will lag behind due to its limitations in quality inspection, fraudulent information, and the monopolistic behavior of supply chain entities. The inclusion of counterfeit products and the opaqueness of the system urge renovation in this sector. The recent evolution of Industry 4.0 is transforming the supply chain, introducing the smart supply chain. Technological advancement can now reshape the infrastructure of the supply chain for the future. In this paper, we suggest a conceptual framework utilizing Blockchain and Smart Contracts to monitor the overall oil supply chain. Blockchain is a groundbreaking technology for monitoring and supporting the security of a decentralized supply chain over a peer-to-peer network. The use of the Internet of Things (IoT), especially sensors, opens a broader window for tracking the global supply chain in real time. We construct a methodology to support reverse traceability for each participant in the supply chain. The functions and characteristics of Blockchain and Smart Contracts are defined, and the implementation of Smart Contracts is shown with detailed analysis. We further describe the challenges of implementing such a system and validate our framework's adaptability in the real world. The paper concludes with future research scope to mitigate the restrictions of data management and maintenance with advanced working prototypes and agile systems, achieving greater traceability and transparency.
Well acquisition in the oil and gas industry can often be a hit-or-miss process, with a poor purchase resulting in substantial loss. Recommender systems suggest items (wells) that users (companies) are likely to buy based on past activity, and applying such a system to well acquisition can increase company profits. While traditional recommender systems are impactful on their own, they are not optimized: they ignore many of the complexities involved in human decision-making and frequently make subpar recommendations. Using a preexisting Python implementation of a Factorization Machine yields more accurate recommendations based on a user-level ranking system. We train a Factorization Machine model on oil and gas well data that includes features such as elevation, total depth, and location. The model produces recommendations by using similarities between companies and wells, as well as their interactions. Our model has a hit rate of 0.680, a reciprocal rank of 0.469, a precision of 0.229, and a recall of 0.463. These metrics imply that while our model is able to recommend the correct wells in a general sense, it does not match exact wells to companies by relevance. To improve the model's accuracy, future models should incorporate additional features such as a well's production data and ownership duration, as these will produce more accurate recommendations.
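The reported metrics can be illustrated with a minimal top-k evaluation sketch of the kind commonly used for recommender systems. The exact evaluation protocol of the study (cutoff k, relevance definition, averaging over users) is not specified here, so the conventions below are assumptions:

```python
def ranking_metrics(recommended, relevant, k=10):
    """Per-user top-k ranking metrics: hit rate, reciprocal rank,
    precision@k, recall. `recommended` is an ordered list of item ids;
    `relevant` is the set of items the user actually acquired.
    The cutoff k=10 is an assumed convention, not from the paper."""
    top_k = recommended[:k]
    hits = [item for item in top_k if item in relevant]
    hit = 1.0 if hits else 0.0                      # any relevant item in top k?
    rr = 0.0
    for rank, item in enumerate(top_k, start=1):    # rank of first hit
        if item in relevant:
            rr = 1.0 / rank
            break
    precision = len(hits) / k
    recall = len(hits) / len(relevant) if relevant else 0.0
    return {"hit": hit, "rr": rr, "precision": precision, "recall": recall}

# Hypothetical example: well "w7" is the only recommended well actually bought
print(ranking_metrics(["w3", "w7", "w1"], {"w7", "w9"}, k=3))
```

Averaging these per-company values over all companies gives dataset-level numbers comparable to the hit rate, reciprocal rank, precision, and recall quoted above.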