0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview
This position is for a Lead Data Engineer in the Commercial Data as a Service group. You will be responsible for helping define and maintain the data systems key to delivering successful outcomes for our customers. You will be hands-on and guide a team of Data Engineers in the associated data maintenance, integrations, enhancements, loads, and transformation processes for the organization. This key individual will work closely with Data Architects to design and implement solutions and ensure successful implementations.

Role
- Leads initiatives to build and maintain database technologies, environments, and applications, seeking opportunities for improvements and efficiencies
- Architects internal data solutions as part of the full stack, including data modelling and integration with file-based as well as event-driven upstream systems
- Writes SQL statements and procedures to optimize SQL execution and query development
- Effectively utilizes tools such as Spark (Scala, Python), Nifi, Spark Streaming, and Informatica for data ETL (see the sketch below)
- Manages the deployment of optimally standardized data solutions and database updates to meet project deliverables
- Leads database security posture, proactively identifying security risks and implementing both risk mitigation plans and control functions
- Oversees the resolution of chronic, complex problems to prevent future data performance issues
- Supports process improvement efforts to identify and test opportunities for automation and/or reduction in time to deployment
- Responsible for complex design (in conjunction with Data Architects), development, performance and system testing, and provides functional guidance and advice to experienced engineers
- Mentors junior staff by providing training to develop technical skills and capabilities across the team

All about you
- Experience developing a specialization in a particular functional area (e.g., modeling, data loads, transformations, replication, performance tuning, logical and physical database design, troubleshooting, backup and recovery, and data security) leveraging Apache Spark, Nifi, Databricks, Snowflake, Informatica, and streaming solutions
- Experience leading a major work stream or multiple smaller work streams for a large domain initiative, often providing technical guidance and advice to project team members
- Experience creating deliverables within the global database technology domains and sub-domains, supporting cross-functional leaders in the technical community to derive new solutions
- Experience supporting automation and/or cloud delivery efforts; may perform financial and cost analysis
- Experience in database architecture or other relevant IT experience
- Experience leading business system application and database architecture design, influencing technology direction across a broad range of IT areas
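As a purely illustrative companion to the Spark ETL duties above, here is a minimal PySpark batch ETL sketch. The paths, column names, and app name are hypothetical placeholders, not details from the posting.

```python
# A minimal PySpark batch ETL sketch; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files from a landing zone
raw = spark.read.option("header", True).csv("/data/landing/orders/")

# Transform: type the columns and drop rows missing the key
orders = (
    raw.withColumn("order_date", to_date(col("order_date"), "yyyy-MM-dd"))
       .withColumn("amount", col("amount").cast("double"))
       .filter(col("order_id").isNotNull())
)

# Load: write a partitioned Parquet table
orders.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders/")
```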
Posted 1 week ago
9.0 - 14.0 years
15 - 25 Lacs
Hyderabad, Pune, Delhi / NCR
Hybrid
Role & responsibilities
- 7+ years of coding experience in #Java, #Microservices, #Spring Boot, #API development
- Experienced in application support using AWS, Azure
- Experienced in #AzureDevOps technologies (Azure SQL Database, Cosmos DB, SignalR, Redis Cache, ADX, App Gateway, Azure APIM, WAF, Key Vault, Keycloak, Kafka, Kubernetes, Cassandra, Azure PaaS, Event Hub, Service Bus, AKS, Kerberos, KQL, Databricks, Azure DevOps, Jenkins, Spinnaker, Dynatrace, ELK, App Insights, Log Analytics)
- Enterprise experience with SQL or NoSQL databases
- Good understanding of DevOps concepts leveraging CI/CD tools such as Jenkins and Bamboo
- Knowledge of commonly used IDEs such as Eclipse and IntelliJ, and UNIX scripts
- Desire to learn new technologies in a dynamic, fast-paced environment
- Knowledge of incident management, problem management, and change management
- Knowledge of environments and infrastructure
- Strong communication skills are a must
- Knowledge of the application SDLC
- Experience working in an onsite/offshore model
- Flexibility for night calls and weekend support
- Knowledge of the change management and release management process
- Excellent communication skills; participates actively in team discussions
- Represents the team on client calls to provide status updates
- Coordinates with multiple technical and business teams
Posted 1 week ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
The Data Architect is responsible for defining and leading the Data Architecture, Data Quality, and Data Governance for systems ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will work with our product, business, and engineering stakeholders, understand our current ecosystems, and then build consensus on designing solutions, writing code and automation, defining standards, and establishing best practices across the company, building world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale.

Role
- Design, implement, and lead Data Architecture, Data Quality, and Data Governance
- Define data modeling standards and foundational best practices
- Develop and evangelize data quality standards and practices
- Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data
- Drive the successful adoption of organizational data utilization and self-serviced data platforms
- Create and maintain critical data standards and metadata that allow data to be understood and leveraged as a shared asset
- Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing
- Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data
- Provide architectural assessments, strategies, and roadmaps for data management
- Apply hands-on subject matter expertise in the architecture and administration of Big Data platforms and Data Lake technologies (AWS S3/Hive), with experience in ML and Data Science platforms
- Implement and manage industry best-practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, and CI/CD
- Translate big data and analytics requirements into data models that operate at large scale and high performance, and guide the data analytics engineers on these models
- Define templates and processes for the design and analysis of data models, data flows, and integration
- Lead and mentor Data Analytics team members in best practices, processes, and technologies in data platforms

Qualifications
- B.S. or M.S. in Computer Science, or equivalent degree
- 10+ years of hands-on experience in Data Warehouse, ETL, Data Modeling & Reporting
- 7+ years of hands-on experience productionizing and deploying Big Data platforms and applications
- Hands-on experience working with relational/SQL, distributed columnar data stores/NoSQL databases, time-series databases, Spark Streaming, Kafka, Hive, Delta, Parquet, Avro, and more
- Extensive experience understanding a variety of complex business use cases and modeling the data in the data warehouse
- Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools
- Proven experience building a custom Enterprise Data Warehouse or implementing tools like Data Catalogs, Spark, Tableau, Kubernetes, and Docker
- Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on Amazon Web Services (AWS) experience
- Strong verbal and written communication skills; works effectively across internal and external organizations and virtual teams
- Demonstrated industry leadership in Data Warehousing, Data Science, and Big Data related technologies
- Strong understanding of distributed systems and container-based development using the Docker and Kubernetes ecosystem
- Deep knowledge of data structures and algorithms
- Experience working in large teams using CI/CD and agile methodologies

Unique ID -
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Marsh McLennan is seeking candidates for the following position based in the Pune office.

Senior Engineer/Principal Engineer

What can you expect?
We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR.

What is in it for you?
- Holidays (as per the location)
- Medical & insurance benefits (as per the location)
- Shared transport (provided the address falls in the service zone)
- Hybrid way of working
- Diversify your experience and learn new skills
- Opportunity to work with stakeholders globally to learn and grow

We will count on you to:
- Design and implement scalable data solutions that support our data-driven decision-making processes.

What you need to have:
- SQL and RDBMS knowledge - 5/5. Postgres. Extensive hands-on experience with database systems covering tables, schemas, views, and materialized views.
- AWS knowledge: core and data engineering services. Glue, Lambda, EMR, DMS, and S3 are the services in focus.
- ETL knowledge: any ETL tool, preferably Informatica. Data warehousing.
- Big data: Hadoop - concepts. Spark - 3/5. Hive - 5/5.
- Python/Java.
- Interpersonal skills: excellent communication skills and team lead capabilities. Good understanding of data systems in large organizational setups. Passion for deep diving into data and delivering value out of it.

What makes you stand out?
- Databricks knowledge.
- Any reporting tool experience; MicroStrategy preferred.

Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people, and a global leader advising clients in over 130 countries across four market-leading businesses. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment and build the confidence to thrive through the power of perspective. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.

R_299578
Posted 1 week ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Description
Technocratic Solutions is a trusted and renowned provider of technical resources on a contract basis, serving businesses globally. With a dedicated team of developers, we deliver top-notch software solutions in cutting-edge technologies such as PHP, Java, JavaScript, Drupal, QA, Blockchain, AI, and more. Our mission is to empower businesses worldwide by offering high-quality technical resources that meet project requirements and objectives. We prioritize exceptional customer service and satisfaction, delivering our services quickly, efficiently, and cost-effectively. Join us and experience the difference of working with a reliable partner driven by excellence and focused on your success.

Job Title: AI/ML Engineer – Generative AI, Databricks, R Programming
Location: Delhi NCR / Pune
Experience Level: 5 years

Job Summary:
We are seeking a highly skilled and motivated AI/ML Engineer with hands-on experience in Generative AI, Databricks, and R programming to join our advanced analytics team. The ideal candidate will be responsible for designing, building, and deploying intelligent solutions that drive innovation, automation, and insight generation using modern AI/ML technologies.

Key Responsibilities:
- Develop and deploy scalable ML and Generative AI models using Databricks (Spark-based architecture).
- Build pipelines for data ingestion, transformation, and model training/inference on Databricks.
- Implement and fine-tune Generative AI models (e.g., LLMs, diffusion models) for use cases like content generation, summarization, and simulation.
- Leverage R for advanced statistical modeling, data visualization, and integration with ML pipelines.
- Collaborate with data scientists, data engineers, and product teams to translate business needs into technical solutions.
- Ensure reproducibility, performance, and governance of AI/ML models.
- Stay updated with the latest trends and technologies in AI/ML and GenAI and apply them where applicable.

Required Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, Statistics, or a related field.
- 5 years of hands-on experience in Machine Learning/AI, with at least 2 years in Generative AI.
- Proficiency in Databricks, including Spark MLlib, Delta Lake, and MLflow.
- Strong command of R programming, especially for statistical modeling and data visualization (ggplot2, dplyr, caret, etc.).
- Experience with LLMs, transformers (HuggingFace, LangChain, etc.), and other GenAI frameworks.
- Familiarity with Python, SQL, and cloud platforms (AWS/Azure/GCP) is a plus.
- Excellent problem-solving, communication, and collaboration skills.

Preferred:
- Certifications in Databricks, ML/AI (e.g., Azure/AWS ML), or R.
- Experience in regulated industries (finance, healthcare, etc.).
- Exposure to MLOps, CI/CD for ML, and version control (Git).

What We Offer:
- Competitive salary and benefits
- Flexible work environment
- Opportunities for growth and learning in cutting-edge AI/ML
- Collaborative and innovative team culture
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Note: Please apply only if you:
- Have 3 years or more of relevant experience in Data Science, excluding internships
- Are comfortable working 5 days a week from Gurugram, Haryana
- Are an immediate joiner or currently serving your notice period

About Eucloid
At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries, from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we're pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise, and your work will drive transformative results for Fortune 100 clients.

What You'll Do
- Analyze structured and unstructured datasets to identify actionable insights and trends.
- Develop and deploy machine learning models and statistical solutions for business challenges.
- Collaborate with data engineers to design scalable data pipelines and architectures.
- Translate complex data findings into clear, actionable recommendations for stakeholders.
- Contribute to building data-driven tools and dashboards for clients.
- Stay updated on emerging trends in AI/ML and apply learnings to projects.

What Makes You a Fit
- Academic Background: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
- Technical Expertise:
  - 3-5 years of hands-on experience in data science, machine learning, or analytics.
  - Proficiency in Python/R and SQL for data analysis and modeling.
  - Familiarity with ML frameworks (e.g., Scikit-learn, TensorFlow) and cloud platforms (AWS/GCP/Azure).
  - Experience with data manipulation tools (Pandas, NumPy) and visualization tools (Tableau, Power BI).
  - Basic understanding of deploying models and working with large-scale data platforms.
- Extra Skills:
  - Strong problem-solving mindset and ability to thrive in agile environments.
  - Excellent communication skills to convey technical concepts to non-technical audiences.
  - Collaborative spirit, with experience working in cross-functional teams.

Why You'll Love It Here
- Innovate with the Best Tech: Work on groundbreaking projects using AI, GenAI, LLMs, and massive-scale data platforms. Tackle challenges that push the boundaries of innovation.
- Impact Industry Giants: Deliver business-critical solutions for Fortune 100 clients across Hi-tech, D2C, Healthcare, SaaS, and Retail. Partner with platforms like Databricks, Google Cloud, and Adobe to create high-impact products.
- Collaborate with a World-Class Team: Join exceptional professionals from IITs, IIMs, NITs, and global leaders like Walmart, Amazon, Accenture, and ZS. Learn, grow, and lead in a team that values expertise and collaboration.
- Accelerate Your Growth: Access our Centres of Excellence to upskill and work on industry-leading innovations. Your professional development is a top priority.
- Work in a Culture of Excellence: Be part of a dynamic workplace that fosters creativity, teamwork, and a passion for building transformative solutions. Your contributions will be recognized and celebrated.

About Our Leadership
- Anuj Gupta – Former Amazon leader with over 22 years of experience in building and managing large engineering teams (B.Tech, IIT Delhi; MBA, ISB Hyderabad).
- Raghvendra Kushwah – Business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow).

Key Benefits
- Competitive salary and performance-based bonus.
- Comprehensive benefits package, including health insurance and flexible work hours.
- Opportunities for professional development and career growth.

Location: Gurugram

Submit your resume to saurabh.bhaumik@eucloid.com with the subject line “Application: Data Scientist”.

Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment.
Posted 1 week ago
2.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Details
Total Years of Experience: 2-9 years
Primary Technologies: SQL, Power BI, Excel, Python
Additional: Azure Synapse, Databricks, Spark, Warehouse Architecture & Development

The Business Intelligence (BI) Engineer is responsible for assisting the specified Human Resource team in the continuous management of all relevant analytics. This position will collect and analyze data to measure the impact of initiatives to support strategic business decision making. It is responsible for working with developers to provide the business at all levels with relevant, intuitive, insight-driven information that is directly actionable. The BI Engineer will become closely integrated with the business and build strong relationships with business leaders. This position will work with multi-national teams in an Agile framework to design and implement actionable reports and dashboards, assisting in designing the broader information landscape available to the business.

Primary Job Functions
- Collaborate directly with the business teams to understand performance drivers and trends in their area, provide insights, make recommendations, and interpret new data and results.
- Design reports and dashboards for consumption by the business; oversee their development for production.
- Perform pro forma modeling and ad hoc analyses.
- Keep up to date on the best visualization practices and dashboard designs. Maintain standardized templates for reports and dashboards. Ensure standardization and consistency of reporting.
- Perform deep-dive analyses into specific issues as needed.
- Define data needs and sources; evaluate data quality and work with the data services team to extract, transform, and load data for analytic discovery projects.
- Ensure BI tools are fully leveraged to provide the insights needed to drive performance.
- Interface closely with technology partners to manage the analytical environment and acquire data sets.
- Utilize statistical and data visualization packages to develop innovative approaches to complex business problems.
- Analyze and communicate the effectiveness of new initiatives; draw insights and make performance improvement recommendations based upon the data sources.
- Use quantitative and qualitative methodologies to draw insights and support the continuous improvement of the business. Analyze initiatives and events utilizing transaction-level data.
- Ensure that appropriate data-driven reports and customer behavior insights continuously flow to management to help improve quality, reduce cost, enhance the guest experience, and deliver continued growth.

Required Qualifications
- Proficient in working with Microsoft Azure services and/or other cloud computing environments
- Experience with Database Management Systems (DBMS), specifically SQL and NoSQL
- Knowledge of an enterprise data visualization platform, such as Power BI or BigQuery
- Advanced analytical and problem-solving skills
- Strong applied algebra skills
- Working knowledge of business statistical applications and econometrics
- Project management skills
- Ability to digest business problems and translate needs into a data-centric context
- Ability to synthesize and analyze large sets of data to yield actionable findings
- Strong attention to detail
- Excellent verbal and written communication skills
- Ability to handle multiple projects simultaneously within established time constraints
- Ability to perform under strong demands in a fast-paced environment
- Work professionally with customers and co-workers to efficiently serve our customers, treating both with enthusiasm and respect

If you feel you have the necessary skill sets and are passionate about the job, please send your profile to vthulasiram@ashleyfurnitureindia.com
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
Raipur, Chhattisgarh
On-site
Company Name: Interbiz Consulting Pvt Ltd
Position/Designation: Data Engineer
Job Location: Raipur (C.G.)
Mode: Work from office
Experience: 2 to 5 years

We are seeking a talented and detail-oriented Data Engineer to join our growing Data & Analytics team. You will be responsible for building and maintaining robust, scalable data pipelines and infrastructure to support data-driven decision-making across the organization.

Key Responsibilities
- Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
- Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
- Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
- Build and schedule jobs using orchestration tools like Apache Airflow or Dagster (see the sketch below).
- Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
- Implement data versioning and transformation using DBT and Apache Iceberg or Delta Lake.
- Manage data cataloging and lineage using tools like Marquez or Collibra.
- Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
- Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills and Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- [1–5+] years of experience in data engineering or related roles.
- Proficiency in Python, with strong knowledge of OOP and data structures & algorithms.
- Comfortable working in Linux environments for development and deployment.
- Strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
- Solid experience with Apache Spark (PySpark/Scala).
- Familiarity with real-time processing tools like Kafka, Flink, or Beam.
- Hands-on experience with Airflow, Dagster, or similar orchestration tools.
- Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc. AZ-900 or other Azure certifications are a plus.
- Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake.
- Understanding of modern Lakehouse architecture and related best practices.
- Familiarity with Marquez, Collibra, or other cataloging tools.
- Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
- Proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Interested candidates may share their CV at swapna.rani@interbizconsulting.com or visit www.interbizconsulting.com. Note: Immediate joiners will be preferred.

Job Type: Full-time
Pay: From ₹25,000.00 per month
Benefits: Food provided, health insurance, leave encashment, Provident Fund
Supplemental Pay: Yearly bonus

Application Questions:
- Do you have at least 2 years of work experience in Python?
- Do you have at least 2 years of work experience in Data Science?
- Are you from Raipur, Chhattisgarh?
- Are you willing to work for more than 2 years?
- What is your notice period?
- What is your current salary, and what are you expecting?

Work Location: In person
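As referenced in the responsibilities above, here is a minimal Airflow DAG sketch for the kind of job orchestration this role describes. The DAG id, schedule, and task bodies are hypothetical placeholders; it assumes Airflow 2.4+ (for the `schedule` argument).

```python
# A minimal Airflow 2.x DAG sketch; DAG id, schedule, and task logic are
# hypothetical placeholders, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull raw records from the source system")

def transform() -> None:
    print("clean and reshape the extracted records")

def load() -> None:
    print("write the transformed records to the warehouse")

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run extract, then transform, then load
```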
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
AI/ML Engineer – Senior Consultant

The AI Engineering Group is part of the Data Science & AI Competency Center and focuses on the technical and engineering aspects of DS/ML/AI solutions. We are looking for experienced AI/ML Engineers to join our team to help us bring AI/ML solutions into production, automate processes, and define reusable best practices and accelerators.

Duties:
The person we are looking for will become part of the Data Science and AI Competency Center, working in the AI Engineering team. The key duties are:
- Building high-performing, scalable, enterprise-grade ML/AI applications in a cloud environment
- Working with Data Science, Data Engineering, and Cloud teams to implement machine learning models into production
- Practical and innovative implementations of ML/AI automation, for scale and efficiency
- Design, delivery, and management of industrialized processing pipelines
- Defining and implementing best practices in the ML model life cycle and ML operations
- Implementing AI/MLOps frameworks and supporting Data Science teams in best practices
- Gathering and applying knowledge of modern techniques, tools, and frameworks in the area of ML architecture and operations
- Gathering technical requirements and estimating planned work
- Presenting solutions, concepts, and results to internal and external clients
- Being a technical leader on ML projects: defining tasks and guidelines and evaluating results
- Creating technical documentation
- Supporting and growing junior engineers

Must-have skills:
- Good understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life cycle, AI architectures
- Good understanding of cloud concepts and architectures, as well as working knowledge of selected cloud services, preferably GCP
- Experience in programming ML algorithms and data processing pipelines using Python
- At least 6-8 years of experience in production-ready code development
- Experience in designing and implementing data pipelines
- Practical experience with implementing ML solutions on GCP Vertex AI and/or Databricks
- Good communication skills
- Ability to work in a team and support others
- Taking responsibility for tasks and deliverables
- Great problem-solving skills and critical thinking
- Fluency in written and spoken English

Nice-to-have skills & knowledge:
- Practical experience with other programming languages: PySpark, Scala, R, Java
- Practical experience with tools like Airflow, ADF, or Kubeflow
- Good understanding of CI/CD and DevOps concepts, and experience working with selected tools (preferably GitHub Actions, GitLab, or Azure DevOps)
- Experience in applying and/or defining software engineering best practices
- Experience productizing ML solutions using technologies like Docker/Kubernetes

We Offer:
- Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites.
- 100% remote. Flexibility regarding working hours. Full-time position.
- Comprehensive online onboarding program with a “Buddy” from day 1.
- Cooperation with top-tier engineers and experts.
- Internal Gallup Certified Strengths Coach to support your growth.
- Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly.
- Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
- Grow as we grow as a company. 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community.
- Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health.
- Plenty of opportunities to donate to charities and support the environment.

Please click on this link to submit your application: https://system.erecruiter.pl/FormTemplates/RecruitmentForm.aspx?WebID=ac709bd295cc4008af7d0a7a0e465818
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

About Us: Looking to hire a Java Full Stack Developer with cloud experience in the Hyderabad location.
Work Mode: Hybrid (10 days a month)
Shifts: General shifts

Primary Responsibilities
- Positions in this function are predominantly involved in developing business solutions by creating new and modifying existing software applications
- Primary contributor in designing, coding, testing, debugging, documenting and supporting all types of applications consistent with established specifications and business requirements to deliver business value
- Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic method
- Cover all primary development activity across all technology functions that ensure we deliver code with high quality for our applications, products and services, and understand customer needs to develop product roadmaps
- Analysis, design, coding, engineering, testing, debugging, standards, methods, tools analysis, documentation, research and development, maintenance, new development, operations and delivery
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
- With every role in the company, each position has a requirement for building quality into every output. This also includes evaluating new tools, techniques, and strategies; automation of common tasks; building common utilities to drive organizational efficiency, with a passion for technology and solutions and influence of thought leadership on future capabilities and opportunities to apply technology in new and innovative ways

Required Qualifications
- Graduate degree or equivalent experience: B.E/B.Tech / MCA / MSc / MTech
- Technical experience: Java, J2EE, ReactJS/Angular
- Solid experience in Core Java, Spring and Hibernate/Spring Data JPA
- Working experience on AWS/GCP/Azure cloud
- Working experience in data analytics: Azure Databricks, Power BI, Synapse, etc.
- Hands-on experience with RDBMS like SQL Server, Oracle, MySQL, PostgreSQL
- Hands-on with Core Java/J2EE (Spring, Hibernate, MVC)
- Hands-on with SQL queries and MySQL experience
- Testing experience in JUnit/Spock/Groovy
- Experience in SOA-based architecture and web services (Apache CXF/JAX-WS/JAX-RS/SOAP/REST)
- Experience in multiple application and web servers (JBoss/Tomcat/WebSphere)
- Experience in continuous integration (Jenkins/Sonar/Nexus/PMD)
- Experience in using profiler tools (JProfiler/JMeter)
- Good working knowledge of the Spring Framework
- Good understanding of UML and design patterns
- Good understanding of performance tuning
- Good in development of applications using Spring Core, Spring JDBC, REST web services & MySQL DB
- Thorough understanding of Object Oriented Analysis and Design (OOAD) concepts

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Java Developer
Job Type: Contract (6 Months)
Location: Pune
Years of Experience: 6+

Role Overview
We are seeking a skilled Java Developer for a 6-month contract role based in Pune. The ideal candidate will have strong hands-on experience in Java-based enterprise application development and a solid understanding of cloud technologies.

Key Responsibilities
- Analyze customer/internal requirements and translate them into software design documents; present RFCs to the architecture team
- Write clean, high-quality, maintainable code based on approved designs
- Conduct thorough unit and system-level testing to ensure software reliability
- Collaborate with cross-functional teams to analyze, design, and deliver applications
- Ensure optimal performance, scalability, and responsiveness of applications
- Take technical ownership of assigned features
- Provide mentorship and support to team members for resolving technical and functional issues
- Review and approve peer code through pull requests

Must-Have Skills
- Frameworks/Technologies: Spring Boot, Spring AOP, Spring MVC, Hibernate, Play, REST APIs, Microservices
- Programming Languages: Core Java, Java 8 (streams, lambdas, fluent-style programming), J2EE
- Database: Strong SQL skills with the ability to write complex queries
- DevOps: Hands-on experience with CI/CD pipelines
- Cloud: Solid understanding of AWS services such as S3, Lambda, SNS, SQS, IAM roles, Kinesis, EMR, Databricks
- Coding Practices: Scalable and maintainable code development; experience in cloud-native application development

Nice-to-Have Skills
- Additional Languages/Frameworks: Golang, React, OAuth, SCIM
- Databases: NoSQL, Redshift
- AWS Tools: KMS, CloudWatch, caching, notification services, queues

Candidate Requirements
- Proven experience in core application development
- Strong communication and interpersonal skills
- Proactive attitude with a willingness to learn new technologies and products
- Collaborative team player with a growth mindset
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
Join our customer's team as a Software Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, onsite environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.

Key Responsibilities:
• Develop, test, and maintain scalable backend components and microservices using Python and PySpark.
• Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms.
• Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance.
• Integrate machine learning models into production-grade backend systems powering innovative AI features.
• Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions.
• Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and scalability.
• Create clear technical documentation and communicate proactively with the team, emphasizing both written and verbal skills.

Required Skills and Qualifications:
• Proficient in Python for backend development with strong coding standards.
• Practical experience with Databricks and PySpark in live production environments.
• Advanced knowledge of MySQL database design, query optimization, and maintenance.
• Solid foundation in machine learning concepts and deploying ML models in backend systems.
• Experience utilizing Redis for effective caching and state management.
• Outstanding written and verbal communication abilities with strong attention to detail.
• Demonstrated success working collaboratively in a fast-paced onsite setting in Hyderabad.

Preferred Qualifications:
• Background in high-growth AI/ML or complex data engineering projects.
• Familiarity with additional backend technologies or cloud-based platforms.
• Experience mentoring or leading technical teams.

Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.
Posted 1 week ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 rounds (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements
- Experience in Big Data technologies (Hadoop, Spark, Nifi, Impala)
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines
- High proficiency in Scala/Java and Spark for applied large-scale data processing
- Expertise with big data technologies, including Spark, Data Lake, and Hive
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Data Engineer – Databricks, Delta Live Tables, Data Pipelines
Location: Bhopal / Hyderabad / Pune (On-site)
Experience Required: 5+ Years
Employment Type: Full-Time

Job Summary:
We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments.

Key Responsibilities:
- Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables (see the sketch below).
- Work with structured and unstructured data to enable analytics and reporting use cases.
- Implement data ingestion, transformation, and cleansing processes.
- Collaborate with Data Architects, Analysts, and Data Scientists to ensure data quality and integrity.
- Monitor data pipelines and troubleshoot issues to ensure high availability and performance.
- Optimize queries and data flows to reduce costs and increase efficiency.
- Ensure best practices in data security, governance, and compliance.
- Document architecture, processes, and standards.

Required Skills:
- Minimum 5 years of hands-on experience in data engineering.
- Proficient in Apache Spark, Databricks, Delta Lake, and Delta Live Tables.
- Strong programming skills in Python or Scala.
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Proficient in SQL for data manipulation and analysis.
- Experience with ETL/ELT pipelines, data wrangling, and workflow orchestration tools (e.g., Airflow, ADF).
- Understanding of data warehousing, big data ecosystems, and data modeling concepts.
- Familiarity with CI/CD processes in a data engineering context.

Nice to Have:
- Experience with real-time data processing using tools like Kafka or Kinesis.
- Familiarity with machine learning model deployment in data pipelines.
- Experience working in an Agile environment.
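To illustrate the kind of pipeline this role centers on, here is a minimal Delta Live Tables sketch using the Databricks `dlt` Python API. The table names, landing path, and quality rule are hypothetical placeholders, and the code runs only inside a Databricks DLT pipeline, which supplies the `spark` session and the `dlt` module.

```python
# A minimal Delta Live Tables sketch; table names, the landing path, and the
# quality expectation are hypothetical. Runs only inside a Databricks DLT pipeline.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders streamed in from a cloud storage landing zone.")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")   # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders")            # hypothetical path
    )

@dlt.table(comment="Cleansed orders with a basic data quality gate.")
@dlt.expect_or_drop("positive_amount", "amount > 0")  # drop rows failing the rule
def orders_clean():
    return dlt.read_stream("orders_raw").where(col("order_id").isNotNull())
```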
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Sr. Data Engineer
Location: Office-Based (Ahmedabad, India)

About Hitech
Hitech is a leading provider of Data, Engineering Services, and Business Process Solutions. With robust delivery centers in India and global sales offices in the USA, UK, and the Netherlands, we enable digital transformation for clients across industries including Manufacturing, Real Estate, and e-Commerce. Our Data Solutions practice integrates automation, digitalization, and outsourcing to deliver measurable business outcomes. We are expanding our engineering team and looking for an experienced Sr. Data Engineer to design scalable data pipelines, support ML model deployment, and enable insight-driven decisions.

Position Summary
We are seeking a Data Engineer / Lead Data Engineer with deep experience in data architecture, ETL pipelines, and advanced analytics support. This role is crucial for designing robust pipelines to process structured and unstructured data, integrate ML models, and ensure data reliability. The ideal candidate will be proficient in Python, R, SQL, and cloud-based tools, and possess hands-on experience in creating end-to-end data engineering solutions that support data science and analytics teams.

Key Responsibilities
- Design and optimize data pipelines to ingest, transform, and load data from diverse sources.
- Build programmatic ETL pipelines using SQL and related platforms.
- Understand complex data structures and perform data transformation effectively.
- Develop and support ML models such as Random Forest, SVM, clustering, regression, etc.
- Create and manage scalable, secure data warehouses and data lakes.
- Collaborate with data scientists to structure data for analysis and modeling.
- Define solution architecture for layered data stacks ensuring high data quality.
- Develop design artifacts including data flow diagrams, models, and functional documents.
- Work with technologies such as Python, R, SQL, MS Office, and SageMaker.
- Conduct data profiling, sampling, and testing to ensure reliability.
- Collaborate with business stakeholders to identify and address data use cases.

Qualifications & Experience
- 4 to 6 years of experience in data engineering, ETL development, or database administration.
- Bachelor's degree in Mathematics, Computer Science, or Engineering (B.Tech/B.E.).
- Postgraduate qualification in Data Science or a related discipline preferred.
- Strong proficiency in Python, SQL, advanced MS Office tools, and R.
- Familiarity with ML concepts and integrating models into pipelines.
- Experience with NoSQL systems like MongoDB, Cassandra, or HBase.
- Knowledge of Snowflake, Databricks, and other cloud-based data tools.
- ETL tool experience and understanding of data integration best practices.
- Data modeling skills for relational and NoSQL databases.
- Knowledge of Hadoop, Spark, and scalable data processing frameworks.
- Experience with Scikit-learn, TensorFlow, PyTorch, GPT, PySpark, etc.
- Ability to build web scrapers and collect data from APIs (see the sketch below).
- Experience with Airflow or similar tools for pipeline automation.
- Strong SQL performance tuning skills in large-scale environments.

What We Offer
- Competitive compensation package based on skills and experience.
- Opportunity to work with international clients and contribute to high-impact data projects.
- Continuous learning and professional growth within a tech-forward organization.
- Collaborative and inclusive work environment.

If you're passionate about building data-driven infrastructure to fuel analytics and AI applications, we look forward to connecting with you.
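As a small illustration of the API data collection mentioned above, here is a minimal Python sketch using the `requests` library. The endpoint URL, pagination scheme, and field names are hypothetical placeholders, not details from the posting.

```python
# A minimal API data-collection sketch; the endpoint, parameters, and field
# names are hypothetical placeholders.
import csv
import requests

API_URL = "https://api.example.com/v1/products"  # hypothetical endpoint

def fetch_all_pages(url: str) -> list[dict]:
    """Collect every record from a paginated JSON API."""
    records, page = [], 1
    while True:
        resp = requests.get(url, params={"page": page}, timeout=30)
        resp.raise_for_status()                # fail loudly on HTTP errors
        batch = resp.json().get("items", [])   # hypothetical response shape
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    rows = fetch_all_pages(API_URL)
    if rows:
        with open("products.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
```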
Anand Soni
Hitech Digital Solutions
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Backend Developer - Python
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India

Job Summary:
Join one of our top customers' teams as a Backend Developer and help drive scalable, high-performance solutions at the intersection of machine learning and data engineering. You'll collaborate with skilled professionals to design, implement, and maintain backend systems powering advanced AI/ML applications in a dynamic, onsite environment.

Key Responsibilities:
- Develop, test, and deploy robust backend components and microservices using Python and PySpark.
- Implement and optimize data pipelines leveraging Databricks and distributed computing frameworks.
- Design and maintain efficient databases with MySQL, ensuring data integrity and high availability.
- Integrate machine learning models into production-ready backend systems supporting AI-driven features.
- Collaborate closely with data scientists and engineers to deliver end-to-end solutions aligned with business goals.
- Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and improved scalability (see the sketch below).
- Write clear and maintainable documentation, and communicate effectively with team members both verbally and in writing.

Required Skills and Qualifications:
- Proficiency in Python programming for backend development.
- Hands-on experience with Databricks and PySpark in a production environment.
- Strong understanding of MySQL database design, querying, and performance tuning.
- Practical background in machine learning concepts and deploying ML models.
- Experience with Redis for caching and state management.
- Excellent written and verbal communication skills, with keen attention to detail.
- Demonstrated ability to work effectively in an on-site, collaborative setting in Hyderabad.

Preferred Qualifications:
- Previous experience in high-growth AI/ML or data engineering projects.
- Familiarity with additional backend technologies or cloud platforms.
- Demonstrated leadership or mentorship in technical teams.
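To illustrate the Redis caching pattern named in the requirements, here is a minimal cache-aside sketch in Python. The connection settings, key scheme, TTL, and the MySQL accessor stub are hypothetical placeholders, not details from the posting.

```python
# A minimal cache-aside sketch with Redis; connection details, key scheme,
# TTL, and the database accessor are hypothetical placeholders.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_profile_from_mysql(user_id: int) -> dict:
    # Stand-in for a real MySQL query (e.g., via mysql-connector or SQLAlchemy).
    return {"id": user_id, "name": "example"}

def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                      # cache hit: skip the database
        return json.loads(cached)
    profile = fetch_profile_from_mysql(user_id) # cache miss: load, then store
    r.setex(key, 300, json.dumps(profile))      # cache for 5 minutes
    return profile
```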
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Technical Project Manager
Location: Hyderabad
Type: Contract
Experience: 10+ years

1. Experience & Domain Expertise
- Data & Analytics: Proven experience delivering metrics, dashboards, and reporting solutions in a data-driven environment.
- Cybersecurity & Technology Domains: Familiarity with cyber controls, software vulnerabilities, and technology risk management.
- Cloud & Data Platforms: Hands-on experience with Azure Cloud, Databricks, and Power BI for data processing and visualization.

2. Project & Delivery Management
- Agile & Scrum Leadership: Lead sprint planning and daily stand-ups, and drive Agile best practices within a globally distributed team.
- End-to-End Project Execution: Manage projects spanning 1-2 years, breaking them into milestones and deliverables for successful execution.
- Dependency & Risk Management: Identify cross-team dependencies, resolve blockers, and mitigate risks proactively.
- Stakeholder Engagement: Collaborate with 30+ stakeholders across 8+ data domains, ensuring alignment and managing expectations.
- Change & Release Management: Handle 10-20 small changes per month, integrating them into quarterly release plans.

3. Communication & Leadership
- Requirement Gathering: Act as a bridge between business and technical teams, translating business needs into actionable technical requirements.
- Cross-functional Coordination: Work with teams across the UK, India, and Poland, ensuring seamless collaboration.
- Meeting Facilitation: Lead project working groups, track progress, and drive decision-making.
- Team Leadership: Foster a motivated and high-performing team culture within a globally distributed setup.

4. Tools & Technology
- Cloud & Data Engineering: Azure Cloud, Databricks
- Reporting & Visualization: Power BI
- Project Management & Agile Tools: JIRA, Confluence, or similar

Interested candidates, share your updated CV to mounika@tekgence.com
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and optimize data pipelines to enhance data processing efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.
Posted 1 week ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Azure Data Engineer with Palantir Foundry Expertise
Location: Noida/Gurgaon/Hyderabad/Bangalore/Pune
Experience Level: 7+ years (Data Engineering), 2+ years (Palantir Foundry)

Role Overview:
We are looking for a highly skilled Azure Data Engineer with hands-on expertise in Palantir Foundry to support critical data integration and application development initiatives. The ideal candidate will have a strong foundation in Python, SQL, PySpark, and Azure services, along with proven experience working across data pipelines, ontologies, and security configurations within the Palantir ecosystem. This role requires both technical acumen and strong communication skills to engage with cross-functional stakeholders, especially in the Oil & Gas engineering context.

Key Responsibilities:

Azure Data Engineering:
- Design, develop, and maintain scalable data pipelines using Azure Data Factory, Azure Databricks, SQL, and PySpark.
- Ensure data quality, integrity, and governance in Azure-based data platforms.
- Collaborate with Product Managers and Engineering teams to support business needs using data-driven insights.

Palantir Foundry Engineering:
- Data Integration: Build and manage pipelines; perform Python-based transformations; integrate varied source systems using code, repositories, and connections.
- Model Integration: Work with business logic, templated analyses, and report models to operationalize analytics.
- Ontology Management: Define object types, relationships, permissions, object views, and custom functions.
- Application Development: Build and manage Foundry applications using Workshop, Writeback, Advanced Actions, and interface customization.
- Security & Governance: Implement data foundation principles; manage access control and restricted views, and ensure data protection compliance.
- Perform ingestion, transformation, and validation within Palantir and maintain seamless integration with Azure services.

Mandatory Technical Skills:
- Strong proficiency in Python, SQL, and PySpark
- Expert in Azure Databricks, Azure Data Factory, Azure Data Lake
- Palantir Foundry hands-on experience, with the ability to demonstrate skills during interviews
- Palantir-specific capabilities:
  - Foundry Certifications: Data Engineering & Foundational
  - Pipeline Builder, Ontology Manager, Object Explorer
  - Mesa language (Palantir's proprietary language)
  - Time Series Data handling
- Working knowledge of Equipment & Sensor data in the Oil & Gas domain

Soft Skills:
- Strong communication and interpersonal skills
- Ability to work independently and drive conversations with Product Managers and Engineers
- Comfortable acting as a voice of authority in cross-functional technical discussions
- Proven ability to operate and support complex data platforms in a production environment

Nice to Have:
- Experience working with AI/ML models integrated in Foundry
- Exposure to AIP (Azure Information Protection) or related security tools
- Experience in Operating & Support functions across hybrid Azure-Palantir environments
Posted 1 week ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making within the organization.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Good to have: experience with Apache Spark and data lake architectures.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and data governance practices.
- Experience with cloud platforms such as AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
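Because this posting stresses data quality frameworks, a lightweight validation gate is a common pattern in practice. The sketch below uses plain PySpark assertions; the table name, columns, and thresholds are assumptions, not a prescribed framework.

```python
# Lightweight data quality gate in PySpark (names and thresholds are illustrative).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.orders")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail the pipeline run early rather than propagate bad data downstream
assert total > 0, "source table is empty"
assert null_keys == 0, f"{null_keys} rows have a null order_id"
assert dupes / total < 0.01, f"duplicate ratio {dupes / total:.2%} exceeds the 1% threshold"
```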
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their data engineering skills and knowledge.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Experience with data pipeline development and management.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud data storage solutions and architectures.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based in Hyderabad.
- 15 years of full-time education is required.
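At this experience level, pipeline management usually means incremental loads handled as Delta Lake upserts rather than full rewrites. A sketch of the MERGE pattern follows; the table and key names are assumptions.

```python
# Incremental upsert into a Delta table (table and key names are illustrative).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.orders_increment")
target = DeltaTable.forName(spark, "curated.orders")

# Update matching rows, insert new ones, in a single atomic operation
(target.alias("t")
 .merge(updates.alias("u"), "t.order_id = u.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```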
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
Title: Azure Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do:
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For:
- 3+ years of experience in data engineering or analytics engineering
- Hands-on experience with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum of 3 years of experience with modern data engineering/data warehousing/data lake technologies on cloud platforms such as Azure, AWS, GCP, and Databricks; Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling
- Solid knowledge of data warehouse best practices, development standards, and methodologies
- Experience with ETL/ELT tools such as ADF, Informatica, and Talend, and data warehousing technologies such as Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, and Google BigQuery
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
- Independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment
- Excellent communication and teamwork abilities

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB knowledge
- SAP ECC/S4 and HANA knowledge
- Intermediate knowledge of Power BI
- Azure DevOps and CI/CD deployments; cloud migration methodologies and processes

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or disability, or on the basis of any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
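Given the emphasis on dimensional modelling alongside PySpark and Spark SQL, a typical lakehouse task looks like the sketch below: exposing a star-schema join as a reporting view. The schema, table, and column names are assumptions.

```python
# Star-schema reporting view in Spark SQL on Databricks (all names are illustrative).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE OR REPLACE VIEW reporting.daily_sales AS
    SELECT d.calendar_date,
           p.product_category,
           SUM(f.amount) AS total_amount,
           COUNT(*)      AS order_count
    FROM   curated.fact_orders f
    JOIN   curated.dim_date    d ON f.date_key    = d.date_key
    JOIN   curated.dim_product p ON f.product_key = p.product_key
    GROUP  BY d.calendar_date, p.product_category
""")
```

Keeping the join logic in a view rather than a materialized table is a common first choice; it can be swapped for a scheduled table build if query volume demands it.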
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role: Application Architect
Project Role Description: Provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational qualification: 15 years of full-time education

Summary: As an Application Architect, you will provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. You will also assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the design and implementation of application solutions.
- Ensure compliance with architectural standards and guidelines.
- Identify opportunities to improve application performance and scalability.
- Mentor junior team members to enhance their skills.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data analytics solutions.
- Experience designing and implementing scalable data architectures.
- Knowledge of data governance and security best practices.
- Hands-on experience with data integration and ETL processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
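For the governance and security bullet, access control in Databricks is often expressed as SQL GRANTs. A minimal sketch, assuming Unity Catalog-style three-level table names and hypothetical principal groups:

```python
# Governance sketch: table-level access control via SQL GRANTs
# (catalog, schema, and principal names are illustrative assumptions).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Analysts may read the curated table; engineers may also modify it
spark.sql("GRANT SELECT ON TABLE main.curated.orders TO `analysts`")
spark.sql("GRANT MODIFY ON TABLE main.curated.orders TO `data_engineers`")
```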
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role: Data Platform Engineer
Project Role Description: Assist with the data platform blueprint and design, encompassing the relevant data platform components. Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to troubleshoot and optimize data workflows for performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
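Troubleshooting and optimizing data workflows usually comes down to a few standard Spark levers. The sketch below shows three common ones; the tables and numbers are assumptions, not a universal recipe, and each change should be validated against the actual query plan.

```python
# Common Spark performance levers (illustrative; verify with the query plan).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

facts = spark.table("curated.fact_orders")
dims = spark.table("curated.dim_product")  # assumed to be a small dimension table

# 1. Broadcast the small dimension to avoid shuffling the large fact table
joined = facts.join(F.broadcast(dims), "product_key")

# 2. Cache a DataFrame that several downstream actions will reuse
joined.cache()

# 3. Right-size output files by coalescing partitions before the write
joined.coalesce(64).write.mode("overwrite").saveAsTable("curated.orders_enriched")
```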
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
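A small, self-contained exercise that combines several of these skills, of the kind often used to practice for Databricks interviews; the sample data here is invented purely for illustration.

```python
# PySpark warm-up: top spender per city using a window function (sample data invented).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

data = [("Pune", "Asha", 1200), ("Pune", "Ravi", 900),
        ("Hyderabad", "Meena", 1500), ("Hyderabad", "Karan", 1700)]
df = spark.createDataFrame(data, ["city", "customer", "spend"])

# Rank customers within each city by spend and keep the top one
w = Window.partitionBy("city").orderBy(F.desc("spend"))
top = df.withColumn("rank", F.row_number().over(w)).filter("rank = 1")
top.show()
```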
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!