3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
SE / Senior Data Engineer (SQL, Python, Airflow, Bash)

About The Role
We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our growing Data Engineering team. In this critical role, you will design, architect, and develop cutting-edge multi-tenant SaaS data solutions hosted on Azure Cloud. Your work will focus on delivering robust, scalable, and high-performance data pipelines and integrations that support our enterprise provider and payer data ecosystem.

This role is ideal for someone with deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will also be expected to mentor and guide junior engineers, help define best practices, and contribute to the overall data strategy. We are specifically looking for someone with strong hands-on experience in SQL and Python, and ideally Airflow and Bash scripting.

Key Responsibilities
- Architect and implement scalable data integration and data pipeline solutions using Azure cloud services.
- Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks, using tools like SQL, Python, and Airflow.
- Build and automate data workflows and orchestration pipelines (a minimal sketch follows this posting); knowledge of Airflow or equivalent tools is a plus.
- Write and maintain Bash scripts for automating system tasks and managing data jobs.
- Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions.
- Develop and manage data flows, data mappings, and data quality and validation rules across multiple tenants and systems.
- Implement best practices for data modeling, metadata management, and data governance.
- Configure, maintain, and monitor integration jobs to ensure high availability and performance.
- Lead code reviews, mentor data engineers, and help shape engineering culture and standards.
- Stay current with emerging technologies and recommend tools or processes that improve the team's effectiveness.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 3+ years of experience in data engineering, with a strong focus on Azure-based solutions.
- Proficiency in SQL and Python for data processing and pipeline development.
- Experience developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash.
- Proven experience designing and implementing real-time and batch data integrations.
- Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies.
- Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture.
- Familiarity with data quality, metadata management, and data validation frameworks.
- Strong problem-solving skills and the ability to communicate complex technical concepts clearly.

Preferred Qualifications
- Experience with multi-tenant SaaS data solutions.
- Background in healthcare data, especially provider and payer ecosystems.
- Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git).
- Experience mentoring and coaching other engineers in technical and architectural decision-making.
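To make the orchestration stack in the posting above concrete, here is a minimal, hypothetical Airflow DAG combining Bash-driven extract/load steps with a Python quality check. The DAG id, schedule, script paths, and XCom expectations are all invented for illustration, not taken from the employer.

```python
# Hypothetical Airflow DAG sketching SQL + Python + Bash orchestration.
# The `schedule` parameter assumes Airflow 2.4+ (older releases use schedule_interval).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def validate_row_counts(**context):
    # BashOperator pushes its last stdout line to XCom by default;
    # here we treat an empty result as a failed load.
    rows = context["ti"].xcom_pull(task_ids="load_to_warehouse")
    if not rows:
        raise ValueError("Load produced no output; failing the quality gate")


with DAG(
    dag_id="provider_data_etl",  # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = BashOperator(
        task_id="extract_files",
        bash_command="python /opt/etl/extract.py --date {{ ds }}",  # assumed script
    )
    load = BashOperator(
        task_id="load_to_warehouse",
        bash_command="python /opt/etl/load.py --date {{ ds }}",  # assumed script
    )
    check = PythonOperator(task_id="quality_check", python_callable=validate_row_counts)

    extract >> load >> check
```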
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Pune, Maharashtra
On-site
The Engineering Lead Analyst is a strategic professional who stays abreast of developments within their own field and contributes to directional strategy by considering its application to their own job and the business. The role is a recognized technical authority for an area within the business.

This position is the lead role on the Client Financials Improvements project. The selected candidate will be responsible for the development and execution of the project within the ISG Data Platform group, working closely with the global team to interface with the business, translating business requirements into technical requirements, and should bring strong functional knowledge of banking and financial systems.

- Lead the definition and ongoing management of the target application architecture for Client Financials.
- Leverage internal and external leading practices, liaising with other Citi risk organizations to determine and maintain appropriate alignment, specifically with Citi Data Standards.
- Establish a governance process to oversee implementation activities and ensure ongoing alignment to the defined architecture.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications: 12-16 years of experience in analyzing and defining risk management data structures.

Skills:
- Strong working experience in Python and PySpark.
- Prior working experience in API and microservices development.
- Hands-on experience writing SQL queries in multiple database environments and operating systems; experience validating the end-to-end flow of data in an application (a minimal sketch follows this posting).
- Hands-on experience working with SQL and NoSQL databases.
- Working experience with Airflow and other orchestrators.
- Experience in application design and architecture.
- Ability to assess the list of packaged applications and define the re-packaging approach.
- Understanding of capital markets (risk management processes) and Loans/CRMS required.
- Knowledge of process automation and engineering is a plus.
- Demonstrated influencing, facilitation, and partnering skills.
- Track record of interfacing with and presenting results to senior management.
- Experience with all phases of the Software Development Life Cycle.
- Strong stakeholder engagement skills.
- Organize and attend workshops to understand the current state of Client Financials.
- Proven aptitude for organizing and prioritizing work effectively and meeting deadlines.
- Propose a solution and deployment approach to achieve the goals.

Citi is an equal opportunity and affirmative action employer. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster, the EEO is the Law Supplement, and the EEO Policy Statement.
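As a hedged illustration of the "validating end-to-end flow of data" skill this posting asks for, the sketch below reconciles row counts and totals between a landing zone and a curated table with PySpark. The paths, table names, and the `amount` column are assumptions for the example.

```python
# Illustrative PySpark reconciliation check between pipeline layers;
# all locations and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-financials-reconciliation").getOrCreate()

source = spark.read.parquet("/data/landing/client_financials")  # assumed landing path
target = spark.table("curated.client_financials")               # assumed curated table

# Compare row counts and a column-level aggregate across layers.
src = source.agg(F.count("*").alias("rows"), F.sum("amount").alias("total")).first()
tgt = target.agg(F.count("*").alias("rows"), F.sum("amount").alias("total")).first()

assert src["rows"] == tgt["rows"], "Row count mismatch between landing and curated"
assert src["total"] == tgt["total"], "Amount totals diverge between layers"
print(f"reconciled {src['rows']} rows, total {src['total']}")
```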
Posted 1 week ago
4.0 years
0 Lacs
Greater Kolkata Area
Remote
Svitla Systems Inc. is looking for a Data Engineer (with ML/AI experience) for a full-time position (40 hours per week) in India.

Our client is the world's largest travel guidance platform, helping hundreds of millions of people each month become better travelers, from planning to booking to taking a trip. Travelers across the globe use the site and app to discover where to stay, what to do, and where to eat based on guidance from those who have been there before. With more than 1 billion reviews and opinions covering nearly 8 million businesses, travelers turn to the platform to find deals on accommodations, book experiences, reserve tables at delicious restaurants, and discover great places nearby, as a travel guide available in 43 markets and 22 languages.

As a member of the Data Platform Enterprise Services team, you will collaborate with engineering and business stakeholders to build, optimize, maintain, and secure the full data vertical, including tracking instrumentation, information architecture, ETL pipelines, and tooling that provide key analytics insights for business-critical decisions at the highest levels of product, finance, sales, CRM, marketing, data science, and more, all in a continuously modernizing tech stack with highly scalable architecture, cloud-based infrastructure, and real-time responsiveness.

Requirements
- BS/MS in Computer Science or a related field.
- 4+ years of experience in data engineering or software development.
- Experience with AI models and LLMs.
- Proven data design and modeling with large datasets (star/snowflake schemas, SCDs, etc.).
- Strong SQL skills and the ability to query large datasets.
- Experience with modern cloud data warehouses: Snowflake, BigQuery, etc.
- ETL development experience: SLAs, performance, and monitoring.
- Familiarity with BI tools and semantic layer principles (e.g., Looker, Tableau).
- Understanding of CI/CD, testing, and documentation practices.
- Comfortable in a fast-paced, dynamic environment.
- Ability to collaborate cross-functionally and communicate with technical and non-technical peers.
- Strong data investigation and problem-solving abilities.
- Comfortable with ambiguity and focused on clean, maintainable data architecture.
- Detail-oriented with a strong sense of ownership.

Nice to Have
- Experience with data governance and data transformation tools.
- Prior work with e-commerce platforms.
- Experience with Airflow, Dagster, Monte Carlo, or Knowledge Graphs.

Responsibilities
- Collaborate with stakeholders from multiple teams to collect business requirements and translate them into technical data model solutions.
- Design, build, and maintain efficient, scalable, and reusable data models in cloud data warehouses (e.g., Snowflake, BigQuery).
- Transform data from many sources into clean, curated, standardized, and trustworthy data products.
- Build data pipelines and ETL processes handling terabytes of data.
- Analyze data using SQL and dashboards; ensure models align with business needs.
- Ensure data quality through testing, observability tools, and monitoring.
- Troubleshoot complex data issues, validate assumptions, and trace anomalies.
- Participate in code reviews and help improve data development standards.

We Offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Annual performance appraisals.
- Remote-friendly culture and no micromanagement.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- 15 PTO days and 10 national holidays.
- Free webinars, meetups, and conferences organized by Svitla.
- Fun corporate celebrations and activities.
- Awesome team, friendly and supportive community!
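This posting calls out dimensional modeling with slowly changing dimensions (SCDs). Below is a minimal pandas sketch of SCD Type 2 maintenance, expiring changed rows and inserting new current versions. The key, attribute, and validity column names are invented, and a production version would more likely run as warehouse SQL (e.g., a Snowflake or BigQuery MERGE) rather than in pandas.

```python
# Toy SCD Type 2 merge on pandas DataFrames; columns are illustrative.
import pandas as pd


def scd2_merge(dim: pd.DataFrame, incoming: pd.DataFrame, key: str,
               attrs: list[str], as_of: str) -> pd.DataFrame:
    """Close out rows whose attributes changed and append new current versions."""
    current = dim[dim["is_current"]]
    merged = current.merge(incoming, on=key, suffixes=("", "_new"))

    # Keys whose tracked attributes differ between current and incoming rows.
    changed_mask = (merged[[f"{a}_new" for a in attrs]].values
                    != merged[attrs].values).any(axis=1)
    changed_keys = merged.loc[changed_mask, key]

    # Expire the superseded versions.
    expire = dim[key].isin(changed_keys) & dim["is_current"]
    dim.loc[expire, ["is_current", "valid_to"]] = [False, as_of]

    # Insert fresh versions for changed keys plus brand-new keys.
    new_keys = set(incoming[key]) - set(current[key])
    to_insert = incoming[incoming[key].isin(set(changed_keys) | new_keys)].copy()
    to_insert["valid_from"], to_insert["valid_to"], to_insert["is_current"] = as_of, None, True
    return pd.concat([dim, to_insert], ignore_index=True)
```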
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
Genpact is a global professional services and solutions firm committed to delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are fueled by curiosity, agility, and a drive to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, and we serve leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are looking for a Senior Principal Consultant, Senior Data Architect to join our team!

Responsibilities:
- Manage programs and ensure the integration and implementation of program elements according to the agreed schedule, with quality deliverables.
- Lead and guide the development team on ETL architecture.
- Collaborate closely with customer architects and business and technical stakeholders to build trust and establish credibility.
- Provide insights on customer direction to guide them towards optimal outcomes.
- Address client technical issues, articulate understanding, and offer solutions in the AWS Cloud and Big Data domains.
- Build the infrastructure for efficient extraction, transformation, and loading of data from various sources using SQL and AWS big data technologies.
- Analyze the existing technology landscape and current application workloads.
- Design and architect solutions with scalability, operational completeness, and elasticity in mind.
- Apply hands-on experience in building Java applications.
- Optimize Spark applications running on Hadoop EMR clusters for performance.
- Develop architecture blueprints, detailed documentation, and bills of materials, including required cloud services.
- Collaborate with various teams to drive business growth and customer success.
- Lead strategic pre-sales engagements with larger, more complex customers.
- Engage and communicate proactively to align internal and external customer expectations.
- Drive key strategic opportunities with top customers in partnership with sales and delivery teams.
- Maintain customer relationships through proactive pre-sales engagements.
- Lead workshops to identify customer needs and challenges.
- Create and present services responses, proposals, and roadmaps to meet customer objectives.
- Lead presales, solutioning, estimations, and POC preparation.
- Mentor team members and build reusable solution frameworks and components.
- Head complex ETL requirements, design, and implementation.
- Ensure client satisfaction with the product by developing architectural requirements.
- Develop project plans, identify resource requirements, and assure code quality.
- Shape and enhance ETL architecture, recommend improvements, and resolve design issues.

Qualifications

Minimum qualifications:
- Engineering degree or equivalent.
- Relevant work experience.
- Hands-on experience with ETL/BI tools such as Talend, SSIS, Ab Initio, and Informatica.
- Experience with cloud technologies such as AWS, Databricks, and Airflow.

Preferred skills:
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving skills.
- Experience in consulting roles within a technology company.
- Ability to articulate technical solutions clearly to different stakeholders.
- Team player with the ability to collaborate effectively.
- Willingness to travel occasionally.

If you are looking to join a dynamic team and contribute to innovative solutions in data architecture, this role might be the perfect fit for you. Join us at Genpact and be part of shaping the future of professional services and solutions!

Job Details:
- Job Title: Senior Principal Consultant, Senior Data Architect
- Primary Location: India-Mumbai
- Schedule: Full-time
- Education Level: Bachelor's / Graduation / Equivalent
- Job Posting Date: Oct 7, 2024, 7:51:53 AM
- Master Skills List: Digital
- Job Category: Full Time
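One responsibility above mentions optimizing Spark applications on Hadoop EMR clusters. The following sketch shows the kind of session-level tuning such work often starts with; every value is a placeholder to be sized against the actual cluster and data volumes.

```python
# Hedged example of Spark-on-EMR performance tuning; values are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("etl-optimized")
    # Let adaptive query execution coalesce shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    # Baseline shuffle parallelism; often sized to a small multiple of total cores.
    .config("spark.sql.shuffle.partitions", "400")
    # Broadcast small dimension tables to avoid shuffle joins (threshold in bytes).
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    # Kryo typically yields smaller, faster shuffles than Java serialization.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

df = spark.read.parquet("s3://my-bucket/raw/events/")  # assumed input location
# Partition output by a low-cardinality column so downstream reads can prune.
(df.repartition("event_date")
   .write.partitionBy("event_date")
   .mode("overwrite")
   .parquet("s3://my-bucket/curated/events/"))
```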
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY's Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. The financial services practice at EY offers integrated advisory services to financial institutions and other capital markets participants. Within EY's Advisory practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way, we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

We're looking for Senior and Manager Big Data experts with expertise in the Financial Services domain and hands-on experience with the Big Data ecosystem:
- Expertise in data engineering, including the design and development of big data platforms.
- Deep understanding of modern data processing technology stacks such as Spark, HBase, and other Hadoop ecosystem technologies; development using Scala is a plus.
- Deep understanding of streaming data architectures and technologies for real-time and low-latency data processing.
- Experience with agile development methods, including core values, guiding principles, and key agile practices.
- Understanding of the theory and application of Continuous Integration/Delivery.
- Experience with NoSQL technologies and a passion for software craftsmanship.
- Experience in the financial industry is a plus.

Nice-to-have skills include familiarity with all Hadoop ecosystem components and Hadoop administrative fundamentals; experience with NoSQL data stores like HBase, Cassandra, and MongoDB, with HDFS, Hive, and Impala, and with schedulers like Airflow and NiFi; and experience in Hadoop clustering and auto-scaling.

You will develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis, and define and develop client-specific best practices around data management within a Hadoop environment on Azure cloud.

To qualify for the role, you must have a BE/BTech/MCA/MBA degree, a minimum of 3 years of hands-on experience in one or more relevant areas, and a total of 6-10 years of industry experience. Ideally, you'll also have experience in the Banking and Capital Markets domains. Skills and attributes for success include using an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporates; strong communication, presentation, and team-building skills; experience producing high-quality reports, papers, and presentations; and experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

What we offer: a team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment; the opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals in the only integrated global transaction business worldwide; and opportunities to work with EY Advisory practices globally with leading businesses across a range of industries.

Working at EY offers inspiring and meaningful projects, education and coaching alongside practical experience for personal development, support, coaching, and feedback from engaging colleagues, opportunities to develop new skills and progress your career, and the freedom and flexibility to handle your role in a way that's right for you.

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
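For the streaming-architecture experience this posting emphasizes, here is a minimal Spark Structured Streaming sketch that reads from Kafka and maintains windowed counts. The broker address, topic, and checkpoint path are invented for the example.

```python
# Minimal Kafka -> Spark Structured Streaming sketch; endpoints are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trade-events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "trade-events")               # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; decode the payload and keep the event time.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# One-minute tumbling-window counts as a stand-in for a real aggregation.
counts = parsed.groupBy(F.window("timestamp", "1 minute")).count()

query = (
    counts.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/trade-events")  # assumed path
    .start()
)
query.awaitTermination()
```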
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Kochi, Kerala
On-site
You will be responsible for big data development and support for applications deployed in production, analyzing business and functional requirements for completeness, and developing code with minimal supervision. Working collaboratively with team members, you will ensure accurate and timely communication and delivery of assigned tasks to guarantee the end products' performance upon release to production. Handling software defects or issues within production timelines and SLAs is a key aspect of the role.

Your responsibilities will include authoring test cases within a defined testing strategy, participating in test strategy development for configuration and custom reports, creating test data, assisting in code-merge peer reviews, reporting status and progress to stakeholders, and providing risk assessment throughout development cycles. You should have a strong understanding of the system and big data strategies and approaches adopted by IQVIA, stay updated on industry knowledge in software application development, and be open to production support roles within the project.

To excel in this role, you should have 5-8 years of overall experience, with at least 2-3 years in Big Data; proficiency in Big Data technologies such as HDFS, Hive, Pig, Sqoop, HBase, and Oozie; strong experience with SQL queries and Airflow; familiarity with PSQL, CI/CD, Jenkins, and UNIX commands; excellent communication and comprehension skills; a good level of confidence; and proven analytical, logical, and problem-solving techniques. Experience with Spark application development and ETL/ELT tools is preferred. Fine-tuned analytical skills, attention to detail, and the ability to work effectively with colleagues from diverse backgrounds are essential.

The minimum educational requirement for this position is a Bachelor's degree in Information Technology or a related field, along with 5-8 years of development experience or an equivalent combination of education, training, and experience.

IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence, accelerating the development and commercialization of innovative medical treatments to enhance patient outcomes and population health worldwide. To learn more, visit https://jobs.iqvia.com.
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Karnataka
On-site
You will play a crucial role as a Data Engineer at the forefront of our data infrastructure development. You will create and maintain systems that ensure the seamless flow, availability, and reliability of data.

Your key tasks at Coforge will include:
- Developing and managing data pipelines to facilitate efficient data extraction, transformation, and loading (ETL) processes.
- Designing and enhancing data storage solutions such as data warehouses and data lakes.
- Ensuring data quality and integrity by implementing data validation, cleansing, and error-handling mechanisms (a minimal sketch follows this posting).
- Collaborating with data analysts, data architects, and software engineers to understand data requirements and provide relevant data sets for business intelligence purposes.
- Automating and enhancing data processes and workflows to drive scalability and efficiency.
- Staying updated on industry trends and emerging technologies in the field of data engineering.
- Documenting data pipelines, processes, and best practices to facilitate knowledge sharing.
- Contributing to data governance and compliance initiatives to adhere to regulatory standards.
- Working closely with cross-functional teams to promote data-driven decision-making across the organization.

Key skills required for this role:
- Proficiency in data modeling and database management.
- Strong programming capabilities, particularly in Python, SQL, and PL/SQL.
- Sound knowledge of Airflow, Snowflake, and dbt.
- Hands-on experience with ETL (Extract, Transform, Load) processes.
- Familiarity with data warehousing and cloud platforms, especially Azure.

Your 5-10 years of experience will be instrumental in successfully fulfilling the responsibilities of this role, located in Greater Noida with a shift timing of 2:00 PM to 10:30 PM IST.
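As an illustration of the data validation, cleansing, and error-handling responsibilities listed above, here is a small, self-contained Python sketch; the required columns and rules are hypothetical stand-ins for whatever the real pipeline enforces.

```python
# Illustrative validation/cleansing step; column names and rules are assumptions.
import pandas as pd

REQUIRED = ["order_id", "customer_id", "amount"]


def validate_and_clean(df: pd.DataFrame) -> pd.DataFrame:
    issues = []
    missing = [c for c in REQUIRED if c not in df.columns]
    if missing:
        # Structural problems fail fast rather than silently passing bad data on.
        raise ValueError(f"Missing required columns: {missing}")

    before = len(df)
    df = df.drop_duplicates(subset="order_id")  # dedupe on the business key
    df = df.dropna(subset=REQUIRED)             # drop incomplete rows

    # Coerce amounts to numeric and quarantine anything invalid or negative.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    bad = df["amount"].isna() | (df["amount"] < 0)
    if bad.any():
        issues.append(f"{int(bad.sum())} rows with invalid amounts dropped")
        df = df[~bad]

    print(f"kept {len(df)}/{before} rows; issues: {issues or 'none'}")
    return df
```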
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Senior Python Engineer at our company, you will leverage your deep expertise in data engineering and API development to drive technical excellence and autonomy. Your primary responsibility will be leading the development of scalable backend systems and data infrastructure that power AI-driven applications across our platform.

You will design, develop, and maintain high-performance APIs and microservices using Python frameworks such as FastAPI and Flask. You will build and optimize scalable data pipelines, ETL/ELT processes, and orchestration frameworks, using AI development tools like GitHub Copilot, Cursor, or CodeWhisperer to enhance engineering velocity and code quality. You will architect resilient, modular backend systems integrated with databases like PostgreSQL, MongoDB, and Elasticsearch, manage workflows and event-driven architectures using tools such as Airflow, Dagster, or Temporal.io, and collaborate with cross-functional teams to deliver production-grade systems in cloud environments (AWS/GCP/Azure) with high test coverage, observability, and reliability.

To be successful in this position, you must have at least 5 years of hands-on experience in Python backend/API development, a strong background in data engineering, and proficiency in AI-enhanced development environments like Copilot, Cursor, or equivalent tools. Solid experience with Elasticsearch, PostgreSQL, and scalable data solutions, along with familiarity with Docker, CI/CD, and cloud-native deployment practices, is crucial. You should also demonstrate the ability to take ownership of features from idea to production.

Nice-to-have qualifications include experience with distributed workflow engines like Temporal.io, a background in AI/ML systems (PyTorch or TensorFlow), familiarity with LangChain, LLMs, and vector search tools (e.g., FAISS, Pinecone), and exposure to weak supervision, semantic search, or agentic AI workflows.

Join us to build infrastructure for cutting-edge AI products and work in a collaborative, high-caliber engineering environment.
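A minimal FastAPI sketch in the spirit of the backend/API work this role describes. The route, response model, and the in-memory dictionary standing in for an Elasticsearch lookup are all assumptions for illustration.

```python
# Sketch of a typed FastAPI microservice endpoint; the data store is faked.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="search-service")


class Document(BaseModel):
    id: str
    title: str
    score: float


# In-memory stand-in for an Elasticsearch index, purely for the example.
FAKE_INDEX = {"1": {"id": "1", "title": "hello world", "score": 0.9}}


@app.get("/documents/{doc_id}", response_model=Document)
async def get_document(doc_id: str) -> Document:
    doc = FAKE_INDEX.get(doc_id)
    if doc is None:
        raise HTTPException(status_code=404, detail="document not found")
    return Document(**doc)
```

Run with `uvicorn main:app` and the typed response model gives you request validation and OpenAPI docs for free, which is much of FastAPI's appeal for this kind of service.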
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Maharashtra
On-site
The Business Analyst position at Piramal Critical Care (PCC), within the IT department in Kurla, Mumbai, involves acting as a liaison between PCC system users, software support vendors, and internal IT support teams. The ideal candidate is expected to be a technical contributor and advisor to PCC business users, assisting in defining strategic application development and integration to support business processes effectively.

Key stakeholders for this role include internal teams such as Supply Chain, Finance, Infrastructure, PPL Corporate, and Quality, as well as external stakeholders like the MS Support team, 3PLs, and consultants. The Business Analyst will report to the Chief Manager, IT Business Partner.

The ideal candidate should hold a B.S. in Information Technology, Computer Science, or equivalent, with 8-10 years of experience in data warehousing, BI, analytics, and ETL tools. Experience in the pharmaceutical or medical device industry is required, along with familiarity with large global reporting tools like Qlik/Power BI, SQL, Microsoft Power Platform, and other related platforms. Knowledge of the computer system validation lifecycle, project management tools, and office tools is also essential.

Key responsibilities of the Business Analyst role include defining user and technical requirements, leading implementation of data warehousing, analytics, and ETL systems, managing vendor project teams, maintaining partnerships with business teams, and proposing IT budgets. The candidate will collaborate with IT and business teams, manage ongoing business applications, ensure system security, and present project updates to the IT steering committee.

The successful candidate must possess excellent interpersonal and communication skills, self-motivation, a proactive customer service attitude, leadership abilities, and a strong service focus. They should be capable of effectively communicating business needs to technology teams, managing stakeholder expectations, and working collaboratively to achieve results.

Piramal Critical Care (PCC) is a subsidiary of Piramal Pharma Limited (PPL) and a global player in hospital generics, particularly inhaled anaesthetics. PCC is committed to delivering critical care solutions globally and maintaining sustainable growth for stakeholders. With a wide presence across the USA, Europe, and over 100 countries, PCC's product portfolio includes inhalation anaesthetics and intrathecal baclofen therapy. PCC's workforce comprises over 400 employees across 16 countries and is dedicated to expanding its global footprint through new product additions in critical care. Committed to corporate social responsibility, PCC collaborates with partner organizations to provide hope and resources to those in need while caring for the environment.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
Working at Freudenberg: "We will wow your world!" This is our promise. As a global technology group, we not only make the world cleaner, healthier, and more comfortable but also offer our 52,000 employees a networked and diverse environment where everyone can thrive individually. Be surprised and experience your own wow moments.

Klüber Lubrication, a company of the Freudenberg Group, is the global leader in specialty lubrication, with manufacturing operations in North and South America, Europe, and Asia, subsidiaries in more than 30 countries, and distribution partners in all regions of the world, supported by our headquarters in Germany. We are passionate about innovative tribological solutions that help our customers be successful. We supply products and services, many of them customized, in almost all industries, from automotive to the wind energy markets.

Some of your benefits:
- Diversity & Inclusion: We focus on providing an inclusive environment and recognize that our diversity contributes to our success.
- Health Insurance: Rely on comprehensive services whenever you need them.
- Personal Development: We offer a variety of trainings to ensure you can develop in your career.
- Safe Environment: We strive to ensure safety remains a top priority and provide a stable environment for our employees.
- Sustainability & Social Commitment: We support social and sustainable projects and encourage employee involvement.

Bangalore, On-Site | Klüber Lubrication India Pvt. Ltd.

You support our team as Statistical Analytics Engineer (F/M/D).

Responsibilities:
- Perform projects in statistics, data science, and artificial intelligence in the technical field of KL under the guidance of experts; evaluate and comment on results.
- Implement data processes and data products into operations and work on their operation and optimization.
- Coordinate regularly with experts in statistics, data science, and artificial intelligence at KL.
- Maintain a balance between effort (time, cost) and the expected benefit of incoming requests.
- Contribute ideas for KL-relevant developments and methods, especially in the field of data analysis and artificial intelligence, and implement them as needed.
- Support the improvement and automation of processes and workflows in your own and adjacent work areas through statistical methods and evaluations.
- Work on cross-functional project teams and on projects with external partners.
- Keep your own expertise up to date (self-study, external training, meetings, specialist groups).
- Support technical colleagues in the introduction, use, and quality assurance of data analysis tools.
- Present contributions to statistics and data science in the context of Klüber internal and customer training.

Qualifications:
- Engineering graduate or Master's in Chemistry, Physics, or an equivalent degree.
- 7-10 years of overall experience in data modeling, analysis, and visualization in a manufacturing or industrial environment.
- Completed higher education in science, with a focus on chemistry, physics, mathematics, or a related discipline.
- Strong experience in methods for processing, evaluating, and modeling data.
- In-depth programming knowledge with a focus on data science and data engineering, preferably in Python.
- Experience using cheminformatics, materials informatics, generative AI, and machine learning.
- Experience developing interactive data applications with frameworks like Shiny, Dash, or Streamlit.
- Good knowledge of relational data (SQL); experience with tools like JupyterLab, GitLab, Airflow, and MLflow; knowledge of chemical analysis and organic chemistry.
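Since the posting lists interactive data applications with Shiny, Dash, or Streamlit, here is a tiny Streamlit sketch of such an app. The CSV file and column names are invented, and the x/y arguments to `line_chart` assume a reasonably recent Streamlit release.

```python
# Minimal Streamlit app sketch; the dataset is a hypothetical measurements file.
import pandas as pd
import streamlit as st

st.title("Lubricant viscosity explorer")

df = pd.read_csv("viscosity_measurements.csv")  # assumed local file

# Interactive temperature filter driving the chart and summary below.
temp_range = st.slider("Temperature range (°C)", 0, 200, (20, 120))
subset = df[df["temperature_c"].between(*temp_range)]

st.line_chart(subset, x="temperature_c", y="viscosity_cst")
st.dataframe(subset.describe())
```

Launched with `streamlit run app.py`, the script reruns top to bottom on every widget change, which is what makes this style of exploratory tool quick to build.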
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
At SpicyChat, we’re on a mission to build the best uncensored roleplaying agent in the world, and we’re looking for a passionate Data Scientist to join our team. Whether you’re early in your data science career or growing into a mid-senior role, this is a unique opportunity to work hands-on with state-of-the-art LLMs in a fast-paced, supportive environment.

Role Overview
We’re looking for a Data Scientist (Junior to Mid-Senior level) who will support our LLM projects across the full data pipeline, from building clean datasets and dashboards to fine-tuning models and supporting cross-functional collaboration. You’ll work closely with ML engineers, product teams, and data annotation teams to bring AI solutions to life.

What You’ll Be Doing
- ETL and Data Pipeline Development: Design and implement data extraction, transformation, and loading (ETL) pipelines. Work with structured and unstructured data from various sources.
- Data Preparation: Clean, label, and organize datasets for training and evaluating LLMs. Collaborate with annotation teams to ensure high data quality.
- Model Fine-Tuning & Evaluation: Support the fine-tuning of LLMs for specific use cases. Assist in model evaluation, prompt engineering, and error analysis.
- Dashboarding & Reporting: Create and maintain internal dashboards to track data quality, model performance, and annotation progress. Automate reporting workflows to help stakeholders stay informed.
- Team Coordination & Collaboration: Communicate effectively with ML engineers, product managers, and data annotators. Ensure that data science deliverables align with product and business goals.
- Research & Learning: Stay current with developments in LLMs, fine-tuning techniques, and the AI ecosystem. Share insights with the team and suggest improvements based on new findings.

Qualifications

Required:
- 1-4 years of experience in a data science, ML, or analytics role.
- Proficient in Python and data science libraries (Pandas, NumPy, scikit-learn).
- Experience with SQL and data visualization tools (e.g., Streamlit, Dash, Tableau, or similar).
- Familiarity with machine learning workflows and working with large datasets.
- Strong communication and organizational skills.

Bonus Points For:
- Experience fine-tuning or evaluating large language models (e.g., OpenAI, Hugging Face, LLaMA, Mistral, etc.).
- Knowledge of prompt engineering or generative AI techniques.
- Exposure to tools like Weights & Biases, Airflow, or cloud platforms (AWS, GCP, Azure).
- Previous work with cross-functional or remote teams.

Why Join NextDay AI?
- 🌍 Remote-first: Work from anywhere in the world.
- ⏰ Flexible hours: Create a schedule that fits your life.
- 🌴 Unlimited leave: Take the time you need to rest and recharge.
- 🚀 Hands-on with LLMs: Get practical experience with cutting-edge AI systems.
- 🤝 Collaborative culture: Join a supportive, ambitious team working on real-world impact.
- 🌟 Mission-driven: A chance to be part of an exciting mission and an amazing team.

Ready to join us in creating the ultimate uncensored roleplaying agent? Send us your resume along with some details on your coolest projects. We’re excited to see what you’ve been working on!
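A hedged sketch of the data-preparation step described above: cleaning annotated text and splitting it for LLM fine-tuning. The JSONL file and the prompt/response column names are assumptions about the export format.

```python
# Illustrative dataset prep for fine-tuning; file and columns are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split

raw = pd.read_json("annotations.jsonl", lines=True)  # assumed annotation export

# Basic cleaning: normalize whitespace, drop empty or duplicate prompts.
raw["prompt"] = raw["prompt"].str.strip()
clean = (raw.dropna(subset=["prompt", "response"])
            .drop_duplicates(subset="prompt"))

# Hold out a small evaluation slice with a fixed seed for reproducibility.
train, test = train_test_split(clean, test_size=0.1, random_state=42)
train.to_json("train.jsonl", orient="records", lines=True)
test.to_json("eval.jsonl", orient="records", lines=True)
print(f"{len(train)} training / {len(test)} eval examples")
```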
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Overview of Role
As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.

Responsibilities
- Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
- AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
- Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services, including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
- Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
- Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
- Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
- Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
- Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
- Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.

Qualifications
- 1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
- Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
- Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred).
- Strong SQL skills for complex data manipulation, querying, and optimization.
- Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
- Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
- Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
- Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
- Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status, or other legally protected class.
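To ground the BigQuery-centric responsibilities above, here is a minimal ELT-style sketch using the official google-cloud-bigquery client; the project, dataset, and table names are placeholders.

```python
# Minimal BigQuery ELT step: stage raw events into a curated table with SQL.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

job = client.query(
    """
    CREATE OR REPLACE TABLE curated.daily_sessions AS
    SELECT user_id, DATE(event_ts) AS day, COUNT(*) AS events
    FROM raw.events
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    GROUP BY user_id, day
    """
)
job.result()  # block until the query finishes
print(f"Processed {job.total_bytes_processed} bytes")
```

In practice a job like this would be scheduled from Composer/Airflow rather than run ad hoc, which is exactly the Composer + BigQuery pairing the posting describes.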
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable, and valued, contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run more than 10 million queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate, and access the data in the lake, for both streaming data and batch. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.

About the Role
Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered yes to these questions, this role is for you!

What you will be doing
- Streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
- Making changes to the underlying systems and, if an opportunity arises, contributing your work back into open source.
- Supporting internal customers and on-call services for the systems we host. Providing a stable environment and a great user experience is another top priority for the team.

We are excited if you have
- 7+ years of production experience building big data platforms based on Spark, Trino, or equivalent.
- Strong programming expertise in Java, Scala, Kotlin, or another JVM language.
- A robust grasp of distributed systems concepts, algorithms, and data structures.
- Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
- Experience working with at least three of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
- Extensive hands-on experience with public cloud: AWS or GCP.
- A BS/MS degree in CS or equivalent.
- AI literacy and an AI growth mindset.

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea: we come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role has been designed as "Hybrid" with an expectation that you will work on average 2 days per week from an HPE office.

Who We Are
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

What you'll do:
As a member of the Decision Support Analytics (DSA) team, you will collaborate with cross-functional teams to design, build, and manage scalable data pipelines, data warehouses, and machine learning (ML) models. Your work will involve analyzing and visualizing data, publishing dashboards or data models, and contributing to the development of web services for Engineering Technologies portals and applications. This role requires strong coding abilities, presentation skills, and expertise in big data infrastructure. The ideal candidate will have experience in end-to-end data generation processes, troubleshooting data/reporting issues, and recommending optimal data solutions. A keen attention to detail and proficiency with tools like Tableau and other data analysis platforms are essential.

- Collaborate with internal stakeholders to gather requirements and understand business workflows.
- Develop scalable data pipelines and ensure high-quality data flow and integrity.
- Use advanced coding skills in languages such as SQL, Python, Java, or Scala to address business needs.
- Leverage statistical methods to analyze data, generate actionable insights, and produce business reports.
- Design meaningful visualizations using tools like Tableau, Power BI, or similar platforms for effective communication with stakeholders.
- Implement or upgrade data analysis tools and assist in strategic decisions regarding new systems.
- Build frameworks and automation tools to streamline data consumption and understanding.
- Train end-users on new dashboards, reports, or tools.
- Provide hands-on support for internal customers across various teams.
- Ensure compliance with data governance policies and security standards.

What you need to bring:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven track record of working with large datasets in fast-paced environments.
- Strong problem-solving skills with the ability to adapt to evolving technologies.
- Typically 8+ years of experience.
- Data engineering tools and frameworks: ETL tools such as WhereScape, Apache Airflow, or Azure Data Factory; big data technologies like Hadoop, Apache Spark, or Kafka.
- Cloud platforms: proficiency in cloud services such as AWS, Azure, or Google Cloud Platform for storage, computing, and analytics.
- Databases: experience with both relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Data modeling and architecture: expertise in designing schemas for analytical use cases and optimizing storage mechanisms.
- Machine learning and automation: familiarity with ML frameworks (e.g., TensorFlow, PyTorch) for building predictive models.
- Scripting and automation: advanced scripting for automation using Python, Scala, or Java.
- APIs and web services: building RESTful APIs for seamless integration with internal/external systems.

Additional skills: Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX).

What We Can Offer You
- Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing.
- Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.
- Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected
Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #networking

Job: Engineering
Job Level: TCP_03

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 week ago
6.0 - 10.0 years
20 - 35 Lacs
Pune, Delhi / NCR
Hybrid
Job Description

Responsibilities:
- Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
- AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc.
- Data Warehousing: Design and implement data warehouses on AWS, leveraging Amazon Redshift or other suitable options.
- Data Lakes: Build and manage data lakes on AWS using S3 and other relevant services.
- Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources.
- Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
- Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms.
- Strong understanding of data warehousing and data lake concepts.
- Proficiency in SQL and at least one programming language (Python/PySpark).
- Good to have: experience with big data technologies like Hadoop, Spark, and Kafka.
- Knowledge of data modeling and data quality best practices.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred.
- Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift.
- Good communication skills and the ability to deliver independently.
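A sketch of an AWS Glue PySpark job matching the skills list above (Glue, PySpark, S3, with output staged for Redshift). The catalog database, table, filter condition, and S3 path are placeholders.

```python
# Hedged AWS Glue job sketch using the standard Glue boilerplate;
# names and paths are invented for illustration.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, filter with Spark, land Parquet in S3
# (e.g., for a subsequent Redshift COPY).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="orders"
)
df = dyf.toDF().filter("order_status = 'COMPLETE'")

df.write.mode("overwrite").parquet("s3://my-bucket/curated/orders/")
job.commit()
```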
Posted 1 week ago
6.0 - 11.0 years
16 - 31 Lacs
Noida, Pune, Gurugram
Hybrid
Role & responsibilities
- AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred.
- Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift.
- Good communication skills and the ability to deliver independently.
- Banking, financial, payment gateway, or similar experience is preferred.
Posted 1 week ago
10.0 years
0 Lacs
Vadodara, Gujarat, India
Remote
The opportunity
Identification, exploration, and implementation of technology and product development and growth strategies to enhance and secure global positioning. Ensure project execution, foster innovation, and develop and secure R&D capabilities in the business for profitability. Devise research methods, focusing on future technical direction and future scenarios in a transforming technology landscape. Prepare technical specifications and specify laboratory test equipment and processes. Make recommendations concerning the acquisition and use of new technological equipment and materials within the budgets and targets set. You may participate in intellectual property evaluations and the development of patent applications, and coordinate pilot-plant or initial production runs on new products or processes.

A Senior Professional (P3) applies advanced knowledge of the job area, typically obtained through advanced education and work experience. Responsibilities may include managing projects and processes, working independently with limited supervision, and coaching and reviewing the work of lower-level professionals. Problems faced are difficult and sometimes complex.

How You'll Make An Impact
- Provide advanced technical leadership in transformer thermal design for R&D initiatives by addressing industrial challenges and remaining abreast of developments through technology reviews.
- Leverage expertise in relevant technologies, tools, and methodologies to support all phases of R&D projects, including design clarification, simulation, and troubleshooting.
- Carry out thermal analyses using CFD and FEA to model heat transfer, airflow, and temperature profiles in power transformers, and develop mathematical models and simulation methodologies for accurate performance prediction.
- Participate in feasibility studies, propose technical solutions, and design products, collaborating with other simulation teams to deliver multi-physics finite element approaches.
- Communicate results via comprehensive reports, specifications, and educational materials to facilitate knowledge dissemination.
- Evaluate and enhance production processes, offering expert advice to manufacturing facilities, and support factory operations as needed.
- Collaborate with global research scientists, technology centers, managers, customers, and academic partners.
- Lead the development of transformer insulation design tools, from defining technical requirements to supporting software development and validating solution quality.
- Make informed recommendations for the business, ensuring documentation and sharing of findings.
- Engage in technology and product development projects to adhere to schedules and budgets, taking on roles such as sub-project leader and aligning activities with R&D goals.
- Expand professional expertise through active involvement in engineering networks.
- Contribute to intellectual property activities by participating in IP discussions, preparing clearance reports, and identifying potential risks associated with R&D tasks.
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business. You are responsible for ensuring compliance with applicable external and internal regulations, procedures, and guidelines.

Your background
- An M.Tech or Ph.D. background is a must. A Ph.D. in thermal or fluid dynamics is required, along with at least 10 years of hands-on experience in thermal and fluid simulation, particularly using 2D and 3D CFD tools such as Fluent.
- Proficiency in object-oriented programming (preferably VB.NET) and a strong grasp of thermodynamics, heat transfer, and fluid mechanics are essential.
- Experience with power transformer thermal design, R&D projects involving transformers, and empirical validation is desirable.
- Familiarity with CAD modeling software like Creo is an asset.
- Candidates should excel at collaborating within international and remote teams and possess excellent interpersonal skills.
- Experience with ACT and APDL programming in Ansys is a plus.
- A comprehensive understanding of the technical field and robust technical expertise are expected.
- Willingness to travel to manufacturing sites as needed and strong English communication skills, both written and verbal, are required.

Hitachi Energy is a global technology leader in electrification, powering a sustainable energy future through innovative power grid technologies with digital at the core. Over three billion people depend on our technologies to power their daily lives. With over a century of pioneering mission-critical technologies like high-voltage, transformers, automation, and power electronics, we are addressing the most urgent energy challenge of our time: balancing soaring electricity demand while decarbonizing the power system. Headquartered in Switzerland, we employ over 50,000 people in 60 countries and generate revenues of around $16 billion USD. We welcome you to apply today.
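Not part of the posting itself, but as a toy illustration of the heat-transfer fundamentals it asks for, the sketch below integrates a simple lumped-parameter heating model in Python. All parameter values are invented, and real transformer thermal work uses the CFD/FEA approaches described above rather than a single-node model.

```python
# Toy lumped-capacitance heating model: dT/dt = (P - h*A*(T - T_amb)) / (m*c),
# integrated with forward Euler. Every parameter here is a made-up example value.
import numpy as np


def temperature_rise(p_loss_w, h_w_per_m2k, area_m2, mass_kg, c_j_per_kgk,
                     t_amb_c=25.0, hours=10, dt_s=10.0):
    """Return the temperature trajectory of a single thermal mass over time."""
    steps = int(hours * 3600 / dt_s)
    temps = np.empty(steps)
    t = t_amb_c
    for i in range(steps):
        # Net heating = internal losses minus convective cooling to ambient.
        t += dt_s * (p_loss_w - h_w_per_m2k * area_m2 * (t - t_amb_c)) / (mass_kg * c_j_per_kgk)
        temps[i] = t
    return temps


# Example: 50 kW of losses into ~20 t of oil/steel mass with convective cooling.
curve = temperature_rise(50_000, 15.0, 40.0, 20_000, 1800.0)
print(f"temperature after 10 h: {curve[-1]:.1f} °C")
```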
Posted 1 week ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Senior MLOps Engineer
Position: Senior MLOps Engineer | Location: Gurugram | Relevant Experience Required: 6+ years | Employment Type: Full-time
About The Role
We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and Cloud-Native Deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.
Key Responsibilities
MLOps & Machine Learning Deployment: Design, implement, and maintain end-to-end ML pipelines from experimentation to production. Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks. Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD). Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection. Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.
Data Engineering & Integration: Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data. Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster. Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation. Implement data clustering, partitioning, and sharding strategies for high availability and scalability. Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures). Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).
Cloud & Infrastructure: Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run). Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow. Optimize for cost, latency, and scalability across distributed environments. Implement infrastructure as code (IaC) with Terraform or Pulumi.
Real-Time ML & Advanced Capabilities: Build real-time inference pipelines with low latency using gRPC, Triton Inference Server, or Ray Serve. Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search. Enable retrieval-augmented generation (RAG) pipelines for LLMs. Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.
Security, Monitoring & Observability: Implement robust access control, encryption, and compliance with SOC2/GDPR/ISO27001. Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry. Ensure zero-downtime deployments with blue-green/canary release strategies. Manage audit trails and explainability for ML models.
Preferred Skills & Qualifications
Core Technical Skills: Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus. MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC. Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt. Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB. Vector Databases: Pinecone, Weaviate, Milvus, Chroma. Visualization: Plotly Dash, Superset, Grafana.
Tech Stack: Orchestration: Kubernetes, Helm, Argo Workflows, Prefect. Infrastructure as Code: Terraform, Pulumi, Ansible. Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS). Model Optimization: ONNX, TensorRT, Hugging Face Optimum. Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams. Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
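As one concrete illustration of the model versioning and tracking responsibilities above, here is a minimal MLflow sketch. The experiment name, registered model name, and toy dataset are assumptions for illustration only, not a prescribed setup.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment name; point MLflow at your own tracking server.
mlflow.set_experiment("churn-model")

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Version the run: parameters, evaluation metric, and the model artifact
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")
```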
Posted 1 week ago
1.0 - 3.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Atomicwork is on a mission to transform the digital workplace experience by uniting people, processes, and platforms through AI automation. Our team is building a modern service management platform that enables growing businesses to reduce operational complexity and drive business success.
We are seeking a skilled and motivated Data Pipeline Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines that support our enterprise search capabilities. Your work will ensure that data from various sources is efficiently ingested, processed, and indexed, enabling seamless and secure search experiences across the organisation.
This position is based out of our Bengaluru office. We offer competitive pay to employees and practical benefits for their whole family. If this sounds interesting to you, read on.
What We're Looking For (Qualifications)
We value hands-on skills and a proactive mindset. Formal qualifications are less important than your ability to deliver results and collaborate effectively. Proficiency in programming languages such as Python, Java, or Scala. Strong experience with data pipeline frameworks and tools (e.g., Apache Airflow, Apache NiFi). Experience with search platforms like Elasticsearch or OpenSearch. Familiarity with data ingestion, transformation, and indexing processes. Understanding of enterprise search concepts, including crawling, indexing, and query processing. Knowledge of data security and access control best practices. Experience with cloud platforms (AWS, GCP, or Azure) and related services. Familiarity with Model Context Protocol (MCP) is a strong plus. Strong problem-solving and analytical skills. Excellent communication and collaboration skills.
What You'll Do (Responsibilities)
Design, develop, and maintain data pipelines for enterprise search applications. Implement data ingestion processes from various sources, including databases, file systems, and APIs. Develop data transformation and enrichment processes to prepare data for indexing. Integrate with search platforms to index and update data efficiently. Ensure data quality, consistency, and integrity throughout the pipeline. Monitor pipeline performance and troubleshoot issues as they arise. Collaborate with cross-functional teams, including data scientists, engineers, and product managers. Implement security measures to protect sensitive data during processing and storage. Document pipeline architecture, processes, and best practices. Stay updated with industry trends and advancements in data engineering and enterprise search.
Why We Are Different (Culture)
As part of Atomicwork, you can shape our company and business from idea to production. Our cultural values also set the bar high, helping us create a better workplace for everyone. Agency: be self-directed; take initiative and solve problems creatively. Taste: hold a high bar; sweat the details; build with care and discernment. Ownership: we demonstrate unwavering commitment to our mission and goals, taking full responsibility for triumphs and setbacks. Mastery: we relentlessly pursue continuous self-improvement as individuals and teams, dedicating ourselves to constant learning and growth. Impatience: we recognize that our world moves swiftly and are driven by an unyielding desire to progress with every endeavor. Customer Obsession: we place our customers at the heart of everything we do, relentlessly seeking to understand their needs and exceed their expectations.
What We Offer (Compensation and Benefits)
We are big on benefits that make sense to you and your family. Fantastic team: the #1 reason why everybody joins us. Convenient offices: well-located offices spread over five different cities. Paid time off: unlimited sick leaves and 15 days off every year. Health insurance: comprehensive health coverage, up to 75% of the premium covered. Flexible allowances: hassle-free reimbursements across spends. Annual outings: for everyone to have fun together.
What Next (Applying for This Role)
Click on the apply button to get started with your application. Answer a few questions about yourself and your work. Wait to hear from us about the next steps. Do you have anything else to tell us? Email careers@atomicwork and let us know what's on your mind.
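To illustrate the ingest-transform-index flow this role describes, here is a hedged sketch using Airflow's TaskFlow API and the elasticsearch-py 8.x client. The DAG name, record shape, index name, and local cluster URL are all hypothetical assumptions, not Atomicwork's implementation.

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def enterprise_search_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling records from a database, file share, or API
        return [{"id": "doc-1", "title": "Onboarding Guide", "body": "..."}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Enrichment step: normalize fields before indexing
        return [{**r, "title": r["title"].strip().lower()} for r in records]

    @task
    def index(records: list[dict]) -> None:
        from elasticsearch import Elasticsearch  # elasticsearch-py 8.x
        es = Elasticsearch("http://localhost:9200")  # assumed local cluster
        for r in records:
            es.index(index="enterprise-search", id=r["id"], document=r)

    index(transform(extract()))

enterprise_search_pipeline()
```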
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas—Oncology, Inflammation, General Medicine, and Rare Disease—we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Data Science Engineer
What You Will Do
Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines.
Roles & Responsibilities: Collaborate with data scientists to develop, train, and evaluate machine learning models. Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring. Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment. Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency. Develop and implement monitoring systems to track model performance and identify issues. Conduct A/B testing and experimentation to optimize model performance. Work closely with data scientists, engineers, and product teams to deliver ML solutions. Stay updated with the latest trends and advancements in the field.
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master’s or Bachelor’s degree and 5 to 9 years of experience in a relevant discipline.
Functional Skills:
Must-Have Skills: Solid foundation in machine learning algorithms and techniques. Experience in MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow); experience in DevOps tools (e.g., Docker, Kubernetes, CI/CD). Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Outstanding analytical and problem-solving skills; ability to learn quickly; good communication and interpersonal skills.
Good-to-Have Skills: Experience with big data technologies (e.g., Spark, Hadoop) and performance tuning in query and data processing. Experience with data engineering and pipeline development. Experience in statistical techniques and hypothesis testing; experience with regression analysis, clustering, and classification. Knowledge of NLP techniques for text analysis and sentiment analysis. Experience in analyzing time-series data for forecasting and trend analysis.
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
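One simple form of the production monitoring this role mentions is statistical drift detection on model inputs. The sketch below is an assumed, minimal approach using a two-sample Kolmogorov-Smirnov test; the synthetic data and alert threshold are illustrative, not Amgen's method.

```python
import numpy as np
from scipy.stats import ks_2samp

# Compare a feature's live distribution against its training baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time values (assumed)
live = rng.normal(loc=0.3, scale=1.0, size=1000)      # recent production values (assumed)

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:  # alert threshold chosen for illustration
    print(f"drift suspected: KS={stat:.3f}, p={p_value:.4f} -> flag for retraining review")
else:
    print("no significant drift detected")
```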
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas—Oncology, Inflammation, General Medicine, and Rare Disease—we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Associate Data Engineer
What You Will Do
Let’s do this. Let’s change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.
Roles & Responsibilities: Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms. Managing and maintaining the AWS and Databricks environments. Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring. Maintaining system uptime and optimal performance. Working closely with cross-functional teams to understand business requirements and translate them into technical solutions. Exploring and implementing new tools and technologies to enhance ETL platform performance.
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Bachelor’s degree and 2 to 6 years of experience.
Functional Skills:
Must-Have Skills: Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores. Proven ability to optimize query performance on big data platforms. Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes. Ability to learn new technologies quickly. Strong problem-solving and analytical skills. Excellent communication and teamwork skills.
Good-to-Have Skills: Experienced with SQL/NoSQL databases and vector databases for large language models. Experienced with data modeling and performance tuning for both OLAP and OLTP databases. Experienced with Apache Spark and Apache Airflow. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Experienced with AWS, GCP, or Azure cloud services.
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
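As a hedged illustration of the PySpark-based ingestion and quality checks this role calls for, consider the following sketch. The bucket paths, column names, and tolerance threshold are hypothetical, chosen only to show the pattern of a validated load into a curated layer.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-ingest").getOrCreate()

# Assumed landing-zone path; replace with your own data lake location.
raw = spark.read.parquet("s3://example-bucket/raw/claims/")

clean = (
    raw.dropDuplicates(["claim_id"])                              # de-duplicate on key
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))  # enforce type
       .filter(F.col("claim_date").isNotNull())                   # drop incomplete rows
)

# Simple integrity gate before loading to the curated layer
total, kept = raw.count(), clean.count()
if (total - kept) > 0.05 * total:  # 5% tolerance chosen for illustration
    raise ValueError(f"quality gate failed: {total - kept} of {total} rows dropped")

clean.write.mode("overwrite").parquet("s3://example-bucket/curated/claims/")
```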
Posted 1 week ago
5.0 - 6.0 years
0 Lacs
Andhra Pradesh, India
On-site
Title: Developer (AWS Engineer)
Requirements
Candidates must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred. Strong hands-on experience and proficiency in Node.js and Python. Seasoned developers capable of independently driving development tasks. Ability to understand the existing system architecture and work towards the target architecture. Experience with data profiling activities: discovering data quality challenges and documenting them.
Good to have: Experience with development and implementation of a large-scale Data Lake and data analytics platform on the AWS Cloud platform. Develop and unit test data pipeline architecture for data ingestion processes using AWS native services. Experience with development on AWS Cloud using services such as Redshift, RDS, S3, Glue ETL, Glue Data Catalog, EMR, PySpark, Python, Lake Formation, Airflow, SQL scripts, etc.
Good to have: Experience with building a data analytics platform using Databricks (data pipelines) and Starburst (semantic layer) in an AWS cloud environment. Experience with orchestration of workflows in an enterprise environment. Experience working with source code management tools such as AWS CodeCommit or GitHub. Experience working with Jenkins or any CI/CD pipelines using AWS services. Working experience with Agile methodology. Experience working with an onshore/offshore model and collaborating on deliverables. Good communication skills to interact with the onshore team.
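For a sense of the Glue ETL development listed above, here is a minimal sketch of a Glue job: read from the Data Catalog, filter, and write Parquet to S3. The database, table, and bucket names are assumptions for illustration.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name and initialize the context.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Assumed catalog database and table registered by a crawler.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_lake", table_name="orders_raw"
)
filtered = source.filter(lambda row: row["status"] == "COMPLETE")

# Write the curated output as Parquet to an assumed S3 prefix.
glue_context.write_dynamic_frame.from_options(
    frame=filtered,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```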
Posted 1 week ago
5.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer | Job Type: Full-time | Department: Data Engineering / Data Science | Reports To: Data Engineering Manager / Chief Data Officer
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.
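Where the role mentions streaming ETL with Kafka, a minimal transform step might look like the following sketch using the kafka-python client. The topic names, broker address, and the currency normalization are assumptions chosen to show the consume-transform-produce pattern.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# Consume raw events from an assumed input topic on an assumed local broker.
consumer = KafkaConsumer(
    "orders.raw",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    # Hypothetical transform: convert an integer cents field to dollars.
    event["amount_usd"] = round(event.get("amount", 0) / 100, 2)
    producer.send("orders.clean", event)  # publish to the curated topic
```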
Posted 1 week ago