
54 Data Infrastructure Jobs - Page 2

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

27 - 30 Lacs

Bengaluru

Hybrid

Key Skills: Data Infrastructure, Analytical Skills, Linear Regression, Tableau, PySpark, Python Roles & Responsibilities: Support end-to-end feature testing and implementation within the application, including data management tasks. Identify business challenges by analyzing market units, gathering insights, and assessing effort versus business value. Analyze production and sales pipelines, assist with broker and client analysis, and deliver actionable insights. Oversee the data quality framework and collaborate with technology teams to ensure consistent definitions for sales-related data within the application. Identify opportunities to improve and standardize reporting and analytical processes. Design and implement internal process enhancements, such as automating manual testing, optimizing delivery workflows, and improving infrastructure scalability. Enhance application functionality and user experience through data-driven root cause analysis and UI improvements. Prepare and document pipeline reports or analyses aligned with sales strategies. Communicate solutions to business stakeholders and incorporate feedback for continuous improvement. Experience in conducting client satisfaction surveys (e.g., Net Promoter Score - NPS). Experience Requirements: 5-10 years of relevant experience in predictive analytics techniques such as Logistic Regression, Linear Regression, Market Basket Analysis, Time Series, Random Forest, Neural Networks, etc. Hands-on experience with Python/PySpark and R. Strong proficiency in Microsoft Word, Excel, and PowerPoint; experienced with relational databases (SQL) and BI tools such as Microsoft BI, Palantir, and Tableau. Self-motivated, well-organized, and capable of managing multiple priorities while meeting tight deadlines. Skilled in communicating insights effectively through data visualization and presentations. Experience working with diverse countries and cultures is an advantage. Prior experience in the Commercial Insurance industry is a plus. Qualifications: Any Graduation, Any Post Graduation in Computer Science, Computer Engineering.
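Illustrative sketch (not part of the listing): a minimal PySpark logistic-regression fit of the kind this role combines, assuming the pyspark package and a local Spark runtime; the column names and toy data are invented.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lead-scoring-sketch").getOrCreate()

# Toy training data; the label and feature columns are invented for the example
df = spark.createDataFrame(
    [(1.0, 250.0, 12.0), (0.0, 40.0, 2.0), (1.0, 180.0, 9.0), (0.0, 60.0, 1.0)],
    ["label", "pipeline_value", "broker_interactions"],
)

# pyspark.ml expects features packed into a single vector column
assembler = VectorAssembler(
    inputCols=["pipeline_value", "broker_interactions"], outputCol="features"
)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(
    assembler.transform(df)
)
print(model.coefficients, model.intercept)
```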

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, your primary role will involve designing, building, and maintaining data pipelines and infrastructure to support data-driven initiatives. You will be responsible for ensuring that data is collected, stored, and processed efficiently to enable analysis and business use. Your key responsibilities will include designing, implementing, and optimizing end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. Additionally, you will be expected to build and maintain data infrastructure to enable organizations to effectively leverage data. You will also play a crucial role in designing and maintaining data models, schemas, and database structures to support analytical and operational use cases. Ensuring data quality, accuracy, and security throughout the data lifecycle will be a key focus area in your role. Collaboration with data scientists, analysts, and other stakeholders will be essential to understand data requirements and deliver effective solutions. Problem-solving skills will be crucial as you identify and address data-related challenges to ensure that data is readily available for analysis and decision-making. To excel in this role, you will need to stay up-to-date with the latest data engineering technologies and tools. This will enable you to leverage the most effective solutions for data pipeline design, data infrastructure maintenance, and data modeling. This is a full-time, permanent position suitable for fresher candidates. The work schedule will be during the day shift and morning shift. Performance bonuses will be provided, and the work location will be in person. Benefits include food provided during work hours, enhancing your overall work experience and ensuring your well-being at the workplace.
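As a plain-Python illustration of the extract-transform-load work described above, here is a minimal pandas sketch; the file paths, column names, and cleaning rules are hypothetical, and writing Parquet assumes pyarrow is installed.

```python
import pandas as pd

def run_pipeline(source_csv: str, target_parquet: str) -> None:
    # Extract: read raw structured data (path is hypothetical)
    raw = pd.read_csv(source_csv)

    # Transform: basic cleaning plus one derived column (illustrative rules)
    clean = raw.dropna(subset=["order_id"]).drop_duplicates("order_id").copy()
    clean["order_total"] = clean["quantity"] * clean["unit_price"]

    # Load: write an analysis-ready columnar file (requires pyarrow)
    clean.to_parquet(target_parquet, index=False)

run_pipeline("orders_raw.csv", "orders_clean.parquet")
```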

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. In this role, you will be a senior contractor engaged on a 2.5-month remote assignment with the potential to extend. We are looking for candidates with the required skills who can work independently as well as within a team environment. Your responsibilities will include facilitating, guiding, and influencing the client and teams towards an effective architectural pattern. You will become an interface between business leadership, technology leadership, and the delivery teams. Additionally, you will perform Migration Assessments and produce Migration Plans that encompass Total Cost of Ownership (TCO), migration architecture, migration timelines, and application waves; design solution architecture on Google Cloud to support critical workloads; and plan heterogeneous Oracle migrations to Postgres or Spanner. You will design a migration path that accounts for the conversion of application dependencies, database objects, data, data pipelines, orchestration, users, and security. Your role will also involve overseeing migration activities and providing troubleshooting support, including translation of DDL and DML, executing data transfers using native Google Cloud and 3rd party tools, and setting up and configuring the relevant Google Cloud components. Furthermore, you will engage with customer teams as a Google Cloud expert to provide education workshops, architectural recommendations, and technology reviews and recommendations. Qualifications: - 5+ years of experience with data engineering, cloud architecture, or working with data infrastructure. - 5+ years of Oracle database management and IT experience. - Experience with Oracle Database adjacent products like Golden Gate and Data Guard. - 3+ years of PostgreSQL experience. - Proven experience in performing performance testing and applying remediations to address performance issues. - Experience in designing data models. - Proficiency in Python programming language and SQL. - Advanced SQL skills, including the ability to write, tune, and interpret SQL queries; tool-specific experience in the database platforms listed above is ideal. - Proven experience in migrating and/or implementing cloud databases like Cloud SQL, Spanner, and Bigtable. Desired Skills: - Google Cloud Professional Architect and/or Data Engineer Certification is preferred. 66degrees is committed to protecting your privacy and handles personal information in accordance with the California Consumer Privacy Act (CCPA).
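One concrete slice of the heterogeneous Oracle-to-Postgres work above is translating DDL types. The sketch below is a deliberately simplified, assumption-laden mapping for a handful of common types; real migrations lean on dedicated tooling (e.g., Ora2Pg or Google's Database Migration Service) rather than hand-rolled translation.

```python
# Simplified Oracle -> PostgreSQL type mapping; illustrative, far from exhaustive
ORACLE_TO_PG = {
    "VARCHAR2": "VARCHAR",
    "NUMBER": "NUMERIC",
    "DATE": "TIMESTAMP",  # Oracle DATE also carries time-of-day
    "CLOB": "TEXT",
    "RAW": "BYTEA",
}

def translate_column_type(oracle_type: str) -> str:
    """Translate a single Oracle column type, preserving any length/scale suffix."""
    base = oracle_type.split("(")[0]
    suffix = oracle_type[len(base):]
    return ORACLE_TO_PG.get(base.upper(), base) + suffix

print(translate_column_type("VARCHAR2(100)"))  # -> VARCHAR(100)
print(translate_column_type("NUMBER(10,2)"))   # -> NUMERIC(10,2)
```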

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
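To ground the RDF/SPARQL requirements that recur in this posting, here is a minimal in-memory knowledge-graph sketch using the rdflib Python library; the namespace and triples are invented, and a production graph would live in a triple store such as GraphDB or Stardog.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.com/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# A few illustrative triples: two typed entities and one relationship
g.add((EX.acme, RDF.type, EX.Customer))
g.add((EX.acme, EX.hasName, Literal("Acme Corp")))
g.add((EX.order42, RDF.type, EX.Order))
g.add((EX.order42, EX.placedBy, EX.acme))

# SPARQL: every order together with the name of the customer who placed it
results = g.query("""
    PREFIX ex: <http://example.com/ontology#>
    SELECT ?order ?name WHERE {
        ?order a ex:Order ; ex:placedBy ?c .
        ?c ex:hasName ?name .
    }
""")
for order, name in results:
    print(order, name)
```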

Posted 3 weeks ago

Apply

5.0 - 9.0 years

7 - 12 Lacs

Kozhikode, Kerala, India

On-site

MALABAR GOLD & DIAMONDS is seeking a highly skilled and proactive Data Science Engineer to join our team. You will be instrumental in building and optimizing our data infrastructure, creating robust data pipelines, and developing analytical tools that provide critical insights into business performance. If you are passionate about working with large datasets, leveraging cloud technologies, and driving data-driven decision-making, we invite you to contribute to our growing analytical capabilities. Role & Responsibilities Putting together large, intricate data sets to satisfy both functional and non-functional business needs. Determining, creating, and implementing internal process improvements, such as redesigning infrastructure for increased scalability, improving data delivery, and automating manual procedures. Building the necessary infrastructure using AWS and SQL technologies to enable effective data extraction, transformation, and loading from a variety of data sources. Reformulating existing frameworks to maximize their functioning. Building analytical tools that make use of the data flow and offer a practical understanding of crucial company performance indicators such as operational effectiveness and customer acquisition. Supporting stakeholders, including the Executive, Product, Data, and Design teams, on technical data challenges and their data infrastructure needs. Remaining up-to-date with developments in technology and industry norms to produce higher-quality results.
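As a small illustration of the AWS-based extraction and loading described above, this sketch stages a SQL query result to Amazon S3 with boto3; the bucket name and table are placeholders, sqlite stands in for the real source so the demo is self-contained, and boto3 is assumed to find credentials in the environment.

```python
import csv
import io
import sqlite3  # stand-in for the real SQL source, to keep the demo runnable

import boto3

def stage_to_s3(bucket: str, key: str) -> None:
    # Extract: query a SQL source (table and values are invented)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (sku TEXT, amount REAL)")
    conn.execute("INSERT INTO sales VALUES ('GOLD-22K', 1520.0)")
    rows = conn.execute("SELECT sku, amount FROM sales").fetchall()

    # Transform: serialize to CSV in memory
    buf = io.StringIO()
    csv.writer(buf).writerows([("sku", "amount"), *rows])

    # Load: upload to S3; assumes credentials and region in the environment
    boto3.client("s3").put_object(Bucket=bucket, Key=key,
                                  Body=buf.getvalue().encode("utf-8"))

stage_to_s3("example-analytics-bucket", "staging/sales.csv")  # hypothetical bucket
```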

Posted 3 weeks ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Kolkata

Work from Office

Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Kolkata

Work from Office

We are seeking a highly skilled and experienced Hadoop Administrator to join our dynamic team. The ideal candidate will have extensive experience in managing and optimizing Hadoop clusters, ensuring high performance and availability. You will work with a variety of big data technologies and play a pivotal role in managing data integration, troubleshooting infrastructure issues, and collaborating with cross-functional teams to streamline data workflows. Key Responsibilities : - Install, configure, and maintain Hadoop clusters, ensuring high availability, scalability, and performance. - Manage and monitor various Hadoop ecosystem components, including HDFS, YARN, Hive, Impala, and other related technologies. - Oversee the integration of data from Oracle Flexcube and other source systems into the Cloudera Data Platform. - Troubleshoot and resolve complex issues related to Hadoop infrastructure, performance, and applications. - Collaborate with cross-functional teams including data engineers, analysts, and architects to optimize data workflows and processes. - Implement and manage data backup, recovery plans, and disaster recovery strategies for Hadoop clusters. - Perform regular health checks on the Hadoop ecosystem, including managing logs, capacity planning, and system updates. - Develop, test, and optimize scripts to automate system maintenance and data management tasks. - Ensure compliance with internal security policies and industry best practices for data protection. - Provide training and guidance to junior team members and help in knowledge sharing within the team. - Create and maintain documentation related to Hadoop administration processes, system configurations, troubleshooting steps, and best practices. - Stay updated with the latest trends in Hadoop technologies and suggest improvements and new tools as necessary. Qualifications : - Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. - 5+ years of hands-on experience in Hadoop administration, with a preference for candidates from the banking or financial sectors. - Strong knowledge of Oracle Flexcube, Cloudera Data Platform, Hadoop, Hive, Impala, and other big data technologies. - Proven experience in managing and optimizing large-scale Hadoop clusters, including cluster upgrades and performance tuning. - Expertise in configuring and tuning Hadoop-related services (e.g., HDFS, YARN, MapReduce). - Strong understanding of data security principles and implementation of security protocols within Hadoop. - Excellent analytical, troubleshooting, and problem-solving skills. - Strong communication and interpersonal skills with the ability to work collaboratively within cross-functional teams. - Ability to work independently, manage multiple priorities, and meet deadlines. - Certification in Hadoop administration or related fields is a plus. - Experience with scripting languages such as Python, Shell, or Perl is desirable.
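Routine health checks like those mentioned above are natural automation targets. The sketch below wraps the standard hdfs dfsadmin -report command from Python and flags low remaining capacity; the threshold is arbitrary, the parsing assumes the common report format (which varies by Hadoop version), and the alert is a placeholder print.

```python
import re
import subprocess

def hdfs_capacity_check(min_free_pct: float = 20.0) -> None:
    # Run the stock HDFS admin report; requires the hdfs client on PATH
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"], capture_output=True, text=True, check=True
    ).stdout

    # Parse the cluster-wide remaining-capacity line; format varies by version
    match = re.search(r"DFS Remaining%:\s*([\d.]+)%", report)
    if match and float(match.group(1)) < min_free_pct:
        # Placeholder alert; a real check would page the on-call channel
        print(f"WARNING: only {match.group(1)}% of HDFS capacity remains")

hdfs_capacity_check()
```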

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Pune

Work from Office

We are looking for a highly skilled and experienced Data Engineer with over 5 years of experience to join our growing data team. The ideal candidate will be proficient in Databricks, Python, PySpark, and Azure, and have hands-on experience with Delta Live Tables. In this role, you will be responsible for developing, maintaining, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. You will collaborate with cross-functional teams to build robust data infrastructure and enable data-driven decision-making. Key Responsibilities: - Design, develop, and manage scalable and efficient data pipelines using PySpark and Databricks - Build and optimize Spark jobs for processing large volumes of structured and unstructured data - Integrate data from multiple sources into data lakes and data warehouses on Azure cloud - Develop and manage Delta Live Tables for real-time and batch data processing - Collaborate with data scientists, analysts, and business teams to ensure data availability and quality - Ensure adherence to best practices in data governance, security, and compliance - Monitor, troubleshoot, and optimize data workflows and ETL processes - Maintain up-to-date technical documentation for data pipelines and infrastructure components Qualifications: 5+ years of hands-on experience in Databricks platform development. Proven expertise in Delta Lake and Delta Live Tables. Strong SQL and Python/Scala programming skills. Experience with cloud platforms such as Azure, AWS, or GCP (preferably Azure). Familiarity with data modeling and data warehousing concepts.
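Because the role names Delta Live Tables explicitly, here is a minimal DLT pipeline sketch; the dlt module exists only inside a Databricks DLT pipeline (where spark is also provided), and the source path, table names, and expectation are invented for illustration.

```python
import dlt  # provided by the Databricks Delta Live Tables runtime only
from pyspark.sql.functions import col

@dlt.table(comment="Raw events ingested from cloud storage (path is hypothetical)")
def events_raw():
    # `spark` is injected by the DLT runtime; this will not run as a plain script
    return spark.read.format("json").load("/mnt/landing/events/")

@dlt.table(comment="Cleaned events with a basic quality gate")
@dlt.expect_or_drop("valid_user", "user_id IS NOT NULL")
def events_clean():
    return dlt.read("events_raw").where(col("event_type").isNotNull())
```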

Posted 1 month ago

Apply

9.0 - 14.0 years

9 - 14 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: Maimsd Technology is seeking a highly skilled and experienced Lead Data Engineer to design, deploy, and maintain our entire data infrastructure, data products, and data pipelines. This role requires leveraging software engineering principles to create fully automated, resilient, modular, flexible, scalable, reusable, and cost-effective data transformation pipelines. The Lead Data Engineer will oversee a variety of storage and computation technologies within the Microsoft Azure ecosystem, handling diverse data types and volumes. Key Responsibilities: Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency. Mentor junior data engineers within the organization, fostering their growth and skill development. Design, develop, and maintain robust data pipelines and ETL processes using a comprehensive suite of Microsoft Azure services , including but not limited to Azure Data Factory, Azure Synapse, Azure Databricks, and Azure Fabric. Effectively utilize Azure data storage accounts (e.g., Azure Data Lake Storage Gen 2 & Azure Blob storage) for organizing and maintaining data pipeline outputs. Collaborate extensively with data scientists, data analysts, data architects, and other stakeholders to deeply understand data requirements and deliver high-quality, impactful data solutions. Optimize data pipelines within the Azure environment for maximum performance, scalability, and reliability. Ensure stringent data quality and integrity through the implementation of advanced data validation techniques and frameworks. Develop and maintain thorough documentation for all data processes, configurations, and best practices. Proactively monitor and troubleshoot data pipeline issues to ensure timely resolution and minimize downtime. Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge and competitive. Manage the CI/CD (Continuous Integration/Continuous Deployment) process for deploying and maintaining data solutions. Background & Skills: Proven experience in a senior or lead role, utilizing software engineering principles for data infrastructure. Strong expertise in designing, developing, and maintaining fully automated data transformation pipelines. In-depth practical experience with a wide variety of storage and computation technologies. Extensive hands-on experience with Microsoft Azure data services such as Azure Data Factory, Azure Synapse, Azure Databricks, and Azure Fabric. Proficient in using Azure data storage solutions like Azure Data Lake Storage Gen 2 and Azure Blob storage. Demonstrated ability to optimize data pipelines for performance, scalability, and reliability in a cloud environment. Experience with data quality and integrity assurance through validation techniques. Familiarity with CI/CD processes for data solutions. Excellent collaboration and communication skills for working with cross-functional teams (data scientists, analysts, architects). Strong problem-solving abilities and a commitment to continuous learning in emerging data technologies.
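The data-validation duties above can be sketched as a small rule-based check; this generic pandas version uses invented column names and rules, and a production Azure pipeline would more likely wire a framework such as Great Expectations into Data Factory or Databricks.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return human-readable rule violations; the rules are illustrative."""
    failures = []
    if df["customer_id"].isna().any():
        failures.append("customer_id contains nulls")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    if df.duplicated(subset=["customer_id", "order_date"]).any():
        failures.append("duplicate (customer_id, order_date) rows")
    return failures

sample = pd.DataFrame({
    "customer_id": [1, 2, None],
    "order_date": ["2024-01-01"] * 3,
    "amount": [10.0, -5.0, 7.0],
})
print(validate(sample))  # the null and negative-amount rules fire on this frame
```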

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Jaipur

Work from Office

Job Overview As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. We are seeking an experienced and strategic Data and Reporting Lead to shape our data strategy and drive data-driven decision-making across the organization. This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals. Responsibilities Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security. Team Leadership: Provide training and development to enhance the team's skills in data management and reporting. Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies. Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives. Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility. Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy. Qualifications Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role. Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related field (STEM). Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus. Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights. Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Kochi

Work from Office

This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals. Responsibilities Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security. Team Leadership: Provide training and development to enhance the team's skills in data management and reporting. Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies. Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives. Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility. Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy. Qualifications Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role. Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related field (STEM). Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus. Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights. Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Mumbai

Work from Office

Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Kolkata

Work from Office

Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 1 month ago

Apply

6.0 - 8.0 years

9 - 13 Lacs

Mumbai

Work from Office

Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist. Department : Platform Engineering. Role Description : This is a remote contract role for a Data Engineer Ontology_5+yrs at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office

Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 1 month ago

Apply

6.0 - 8.0 years

9 - 13 Lacs

Kolkata

Remote

Role Description : This is a remote contract role for a Data Engineer Ontology_5+yrs at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 1 month ago

Apply

6.0 - 8.0 years

9 - 13 Lacs

Chennai

Work from Office

This is a remote contract role for a Data Engineer Ontology_5+yrs at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality and Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration and Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.

Posted 1 month ago

Apply

3.0 - 5.0 years

20 - 22 Lacs

Udaipur

Work from Office

3-5 years of experience in Data Engineering or similar roles. Strong foundation in cloud-native data infrastructure and scalable architecture design. Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools. Design and optimize Data Lakes and Data Warehouses for real-time and batch processing.

Posted 1 month ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Noida

Work from Office

MAQ LLC d.b.a. MAQ Software has multiple openings at Redmond, WA for: Software Data Operations Engineer (BS+2). Responsible for gathering & analyzing business requirements from customers. Implement, test, and integrate software applications for use by customers. Develop & review cost-effective data architecture to ensure appropriateness with current industry advances in data management, cloud & user experience. Automate user test scenarios, debug & fix errors in cloud-based data infrastructure and reporting applications to meet customer needs. Must be able to travel temporarily to client sites and/or relocate throughout the United States. Requirements: Bachelor's Degree or foreign equivalent in Computer Science, Computer Applications, Computer Information Systems, Information Technology, or related field with two years of work experience in the job offered, software engineer, systems analyst, or related job.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Chennai

Work from Office

What You'll Need 5 years of experience in scripting languages such as Python, JavaScript, or TypeScript Familiarity with low-code/no-code platforms such as Zapier Ability to adapt to or learn other languages such as XML, internal scripting languages, etc. Proven collaborator with multiple stakeholders, including operations, engineering, and data infrastructure Strong communication skills, high attention to detail, and proven ability to use metrics to drive decisions A sense of ownership and a passion for delighting customers through innovation and creative solutions to complex problems About the Role We're seeking innovative problem-solvers with expertise in automation, scripting, and process optimization to help us scale and redefine the industry. If you thrive on collaboration and creating impactful solutions, come help us fix what's broken in real estate and transform the way people move. What You'll Do Support operating teams by building and maintaining scripted, automated solutions to minimize the need for repetitive, manual effort; respond to real-time, time-sensitive operational needs Partner with the engineering team to build products and tools, as well as evolve existing ones; tools focus on automation and process optimization for listings Contribute to all phases of process and tool development, including ideation, prototyping, design, production, and testing; iterate on the final product for continued improvement
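As a flavor of the scripted automation this role describes, the sketch below polls a listings endpoint and forwards unseen records to a webhook with the requests library; both URLs and the response shape (a JSON list of objects with an id field) are hypothetical stand-ins for the team's real systems or a low-code trigger such as a Zapier catch hook.

```python
import time

import requests

LISTINGS_URL = "https://api.example.com/listings?status=new"    # hypothetical
WEBHOOK_URL = "https://hooks.example.com/catch/listing-update"  # hypothetical

seen: set[str] = set()

def poll_once() -> None:
    # Assumes the endpoint returns a JSON list of objects with an "id" field
    for listing in requests.get(LISTINGS_URL, timeout=10).json():
        if listing["id"] not in seen:
            requests.post(WEBHOOK_URL, json=listing, timeout=10)
            seen.add(listing["id"])

while True:
    poll_once()
    time.sleep(60)  # fixed-interval poll; a real job would use a scheduler
```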

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office

As a Senior SRE at Triomics, you will: Architect, deploy, and manage robust, secure, and scalable infrastructure across AWS, Azure, and GCP Design and implement CI/CD pipelines using Jenkins to support rapid development and deployment cycles Orchestrate containers using Kubernetes and Docker for high-availability applications Implement Infrastructure as Code (IaC) using Terraform and Helm Set up and enforce secret management practices and security protocols across environments Automate workflows using Python and Bash Manage and optimize data infrastructure including PostgreSQL and Redis Administer Linux servers and handle in-depth troubleshooting Configure network setups and enforce security hardening techniques Deploy and monitor AI/ML workloads in production, ensuring performance and reliability Build monitoring and logging solutions using modern tools to support production-grade observability Support multi-tenant and single-tenant customer deployments with strong isolation and SLA guarantees Collaborate with engineering teams to define and maintain deployment workflows Write and maintain clear and comprehensive technical documentation and SOPs Requirements Minimum 6 years of experience in DevOps engineering Proven track record of 2+ years longevity in each prior role Strong experience in multi-cloud management (Azure, AWS, GCP) Solid background in Kubernetes, Docker, Jenkins, Terraform, Helm Deep understanding of security best practices and secret management Proficiency in Python and Bash scripting Experience with Linux system administration, network configuration, and security hardening Hands-on experience in PostgreSQL, Redis Demonstrated experience with monitoring, logging, and incident response systems Prior experience at a Y Combinator-backed startup or a similarly reputable startup is mandatory Experience deploying and scaling AI/ML workloads in production Excellent communication and documentation skills Software development experience is a significant plus Healthcare industry exposure is a significant plus
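One small slice of the SRE duties above, observability over Kubernetes workloads, can be sketched with the official kubernetes Python client; it assumes a reachable cluster and a local kubeconfig, and the namespace is a placeholder.

```python
from kubernetes import client, config

def report_unhealthy_pods(namespace: str = "default") -> None:
    # Load credentials from the local kubeconfig (in-cluster config also exists)
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.name}: {pod.status.phase}")

report_unhealthy_pods("production")  # hypothetical namespace
```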

Posted 1 month ago

Apply

10.0 - 18.0 years

30 - 35 Lacs

Hyderabad

Remote

Role : Solution Architect Company : Feuji Software Solutions Pvt Ltd. Mode of Hire : Permanent Position Experience : 10+ Years Work Location : Hyderabad/ Remote About Feuji Feuji, established in 2014 and headquartered in Dallas, Texas, has rapidly emerged as a leading global technology services provider. With strategic locations including a Near Shore facility in San Jose, Costa Rica, and Offshore Delivery Centers in Hyderabad, and Bangalore, we are well-positioned to cater to a diverse clientele. Our team of 600 talented engineers drives our success, delivering innovative solutions to our clients and contributing to our recognition as a 'Best Place to Work For.' We collaborate with a wide range of clients, from startups to industry giants in sectors like Healthcare, Education, IT, and engineering, enabling transformative changes in their operations. Through partnerships with top technology providers such as AWS, Checkpoint, Gurukul, CoreStack, Splunk, and Micro Focus, we empower our clients' growth and innovation. With a clientele including Microsoft, HP, GSK, and DXC Technologies, we specialize in managed cloud services, cybersecurity, Product and Quality Engineering Services, and Data and Insights solutions, tailored to drive tangible business outcomes. Our commitment to creating 'Happy Teams' underscores our values and dedication to positive impact. Feuji welcomes exceptional talent to join our team, offering a platform for growth, development, and a culture of innovation and excellence. Key Responsibilities Design and implement scalable, secure, and resilient cloud solutions tailored to enterprise needs Architect hybrid solutions that integrate on-premises infrastructure with cloud services, focusing on seamless connectivity and data flow Develop and manage cloud networking solutions, including virtual networks, subnets, VPN gateways, ExpressRoute, and traffic management Ensure secure and optimized connectivity between on-premises environments and Azure cloud Implement and oversee cloud security best practices, including identity and access management (IAM), encryption, firewalls, and security monitoring Analyze and compare the cost implications of on-premises vs. 
cloud solutions. Optimize resources to balance performance with cost-effectiveness, providing recommendations for cost-saving strategies. Design and implement comprehensive disaster recovery (DR) plans, ensuring business continuity for enterprise applications. Work closely with clients to understand their business requirements and translate them into technical solutions. Provide strategic guidance on cloud adoption, migration, and optimization to senior stakeholders. Lead technical workshops, training sessions, and presentations for clients and internal teams. Oversee the end-to-end delivery of cloud solutions, ensuring projects are completed on time, within scope, and within budget. Collaborate with cross-functional teams to ensure the successful deployment of solutions. Develop and maintain comprehensive technical documentation, including architecture diagrams, configuration guides, and operational procedures. Ensure all documentation is up-to-date and accessible to relevant stakeholders. Skills, Knowledge, and Expertise Required Qualifications: 10+ years of Azure experience. 5+ years of solution architecture experience. Proven experience in designing and implementing enterprise-scale solutions. Experience with on-premises infrastructure, cloud migration strategies, and cost optimization. Experience in managing large-scale projects. 5+ years of Kubernetes experience. Data infrastructure experience. Terraform experience. Cloud certifications. Excellent communication skills. Strong multi-tasker. Self-starter. Team player. Preferred Qualifications: Consulting experience. Azure, AWS, and GCP Professional-level certifications. Kubernetes certifications (CKA, CKAD, CKS).

Posted 1 month ago

Apply

8.0 - 12.0 years

15 - 20 Lacs

Pune

Work from Office

We are looking for a highly experienced Lead Data Engineer / Data Architect to lead the design, development, and implementation of scalable data pipelines, data Lakehouse, and data warehousing solutions. The ideal candidate will provide technical leadership to a team of data engineers, drive architectural decisions, and ensure best practices in data engineering. This role is critical in enabling data-driven decision-making and modernizing our data infrastructure. Key Responsibilities: Act as a technical leader responsible for guiding the design, development, and implementation of data pipelines, data Lakehouse, and data warehousing solutions. Lead a team of data engineers, ensuring adherence to best practices and standards. Drive the successful delivery of high-quality, scalable, and reliable data solutions. Play a key role in shaping data architecture, adopting modern data technologies, and enabling data-driven decision-making across the team. Provide technical vision, guidance, and mentorship to the team. Lead technical design discussions, perform code reviews, and contribute to architectural decisions.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Bengaluru

Work from Office

Primary Skills - Snowflake, DBT, AWS; Good to have Skills - Fivetran (HVR), Python Responsibilities: Design, develop, and maintain data pipelines using Snowflake, DBT, and AWS. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Optimize and troubleshoot existing data workflows to ensure efficiency and reliability. Implement best practices for data management and governance. Stay updated with the latest industry trends and technologies to continuously improve our data infrastructure. Required Skills: Proficiency in Snowflake, DBT, and AWS. Experience with data modeling, ETL processes, and data warehousing. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Preferred Skills: Knowledge of Fivetran (HVR) and Python. Familiarity with data integration tools and techniques. Ability to work in a fast-paced and agile environment. Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
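To ground the Snowflake side of this stack, here is a minimal sketch using the snowflake-connector-python package to run a freshness check; the account, credentials, warehouse, and table are placeholders, and in a DBT-centric stack the transformations themselves would live in DBT models rather than ad-hoc queries.

```python
import snowflake.connector

# Every connection value is a placeholder; real jobs read them from a secret store
conn = snowflake.connector.connect(
    account="example-account",
    user="ETL_USER",
    password="********",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # The kind of freshness check a pipeline might run after a Fivetran sync
    cur.execute("SELECT MAX(loaded_at) FROM raw_orders")  # hypothetical table
    print(cur.fetchone())
finally:
    conn.close()
```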

Posted 1 month ago

Apply