
4657 Apache Jobs - Page 38

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

India

On-site


Ready to be pushed beyond what you think you’re capable of? At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform and, with it, the future global financial system. To achieve our mission, we’re seeking a very specific candidate: someone who is passionate about our mission and believes in the power of crypto and blockchain technology to update the financial system; someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high-caliber colleagues, and who actively seeks feedback to keep leveling up; someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.

The mission of the Platform Product Group engineers is to build a trusted, scalable, and compliant platform to operate with speed, efficiency, and quality. Our teams build and maintain the platforms critical to the existence of Coinbase. The group comprises many teams, including Product Foundations (i.e. Identity, Payment, Risk, Proofing & Regulatory, Finhub), Machine Learning, Customer Experience, and Infrastructure.

As a machine learning engineer, you will play a pivotal role in constructing essential infrastructure for the open financial system. This involves harnessing diverse and extensive data sources, including the blockchain, to grant millions of individuals access to cryptocurrency while simultaneously identifying and thwarting malicious entities. Your impact extends beyond safeguarding Coinbase: you'll have the opportunity to employ machine learning to enhance the overall user experience, imbuing intelligence into recommendations, risk assessment, chatbots, and various other aspects, making our product not only secure but also exceptionally user-friendly.

What you’ll be doing (i.e. job duties):
- Investigate and harness cutting-edge machine learning methodologies, including deep learning, large language models (LLMs), and graph neural networks, to address diverse challenges throughout the company, such as fraud detection, feed ranking, recommendation systems, targeting, chatbots, and blockchain mining.
- Develop and deploy robust, low-maintenance applied machine learning solutions in a production environment.
- Create onboarding codelabs, tools, and infrastructure to democratize access to machine learning resources across Coinbase, fostering a culture of widespread ML utilization.

What we look for in you (i.e. job requirements):
- 5+ years of industry experience as a machine learning and software engineer.
- Experience building backend systems at scale with a focus on data processing, machine learning, or analytics.
- Experience with at least one family of ML models: LLMs, GNNs, deep learning, logistic regression, gradient-boosted trees, etc. (a toy illustration follows this listing).
- Working knowledge in one or more of the following: data mining, information retrieval, advanced statistics, natural language processing, computer vision.
- Exhibit our core cultural values: add positive energy, communicate clearly, be curious, and be a builder.

Nice to haves:
- BS, MS, or PhD in Computer Science, Machine Learning, Data Mining, Statistics, or a related technical field.
- Knowledge of Apache Airflow, Spark, Flink, Kafka/Kinesis, Snowflake, Hadoop, Hive.
- Experience with Python.
- Experience with model interpretability and responsible AI.
- Experience with data analysis and visualization.

Job #: GPML05IN

*Answers to crypto-related questions may be used to evaluate your onchain experience. Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying.

Commitment to Equal Opportunity
Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation, or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state, and local law. For US applicants, you may view the Know Your Rights notice here. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site, click here to download a free compatible screen reader (a free step-by-step tutorial can be found here).

Global Data Privacy Notice for Job Candidates and Applicants
Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
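The requirements above name gradient-boosted trees among the acceptable model families. As a purely illustrative sketch, not Coinbase code, a minimal fraud-classification baseline in scikit-learn might look like this; the data and feature construction are synthetic:

```python
# Illustrative only: a minimal fraud-detection baseline of the kind the
# requirements mention (gradient-boosted trees). Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 8))  # stand-in for transaction features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=10_000) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```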

Posted 6 days ago

Apply

10.0 years

0 Lacs

Kanayannur, Kerala, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Drive Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance; activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
- Work with clients to convert business problems and challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]
- Understand current and future-state enterprise architecture.
- Contribute to various technical streams during project implementation.
- Provide product- and design-level technical best practices.
- Interact with senior client technology leaders to understand their business goals, then create, architect, propose, develop, and deliver technology solutions.
- Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
- Recommend design alternatives for the data ingestion, processing, and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a minimal sketch follows this listing).

Skills and Attributes for Success
- Experience architecting highly scalable solutions on Azure, AWS, and GCP.
- Strong understanding of and familiarity with Azure/AWS/GCP and big data ecosystem components.
- Strong understanding of the underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Apache Spark using Python/Scala, and Spark Streaming.
- Hands-on experience with major components such as cloud ETL tools, Spark, and Databricks.
- Experience working with NoSQL data stores: at least one of HBase, Cassandra, MongoDB.
- Knowledge of Spark-Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks.
- Strong understanding of the underlying Hadoop architectural concepts and distributed computing paradigms.
- Good knowledge of Apache Kafka and Apache Flume.
- Experience with enterprise-grade solution implementations.
- Experience in performance benchmarking of enterprise applications.
- Experience in data security (in motion, at rest).
- Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent written and verbal communication skills, both formal and informal.
- The ability to multi-task under pressure and work independently with minimal supervision.
- A team player's attitude, enjoying a cooperative and collaborative team environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the big data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- Responsibility for evaluating technical risks and mapping out mitigation strategies.
- Working knowledge of at least one cloud platform: AWS, Azure, or GCP.
- Excellent business communication, consulting, and quality process skills.
- Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains.
- A minimum of 7 years of hands-on experience in one or more of the above areas.
- A minimum of 10 years of industry experience.

Ideally, you’ll also have
- Strong project management skills.
- Client management skills.
- Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
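The responsibilities above mention ingesting real-time data with Apache Kafka and Spark Streaming. A minimal PySpark Structured Streaming sketch of that pattern might look like the following; the broker address, topic, schema, and paths are hypothetical, and the job needs the spark-sql-kafka connector package on the classpath:

```python
# Hypothetical sketch of the Kafka -> Spark Structured Streaming ingestion
# pattern this role describes; broker, topic, schema, and paths are invented.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "live-events")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(events.writeStream
       .format("parquet")
       .option("path", "/data/events")
       .option("checkpointLocation", "/chk/events")  # required for fault tolerance
       .start()
       .awaitTermination())
```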

Posted 6 days ago

Apply

8.0 - 10.0 years

0 Lacs

Andhra Pradesh, India

On-site


Summary about Organization
A career in our Advisory Acceleration Center is the natural extension of PwC’s leading global delivery capabilities. The team consists of highly skilled resources who assist clients in transforming their business by adopting technology, using bespoke strategy, operating models, processes, and planning. You will be at the forefront of helping organizations adopt innovative technology solutions that optimize business processes or enable scalable technology. Our team helps organizations transform their IT infrastructure and modernize applications and data management to help shape the future of business. An essential and strategic part of Advisory's multi-sourced, multi-geography Global Delivery Model, the Acceleration Centers are a dynamic, rapidly growing component of our business. The teams in these Centers have achieved remarkable results in process quality and delivery capability, resulting in a loyal customer base and a reputation for excellence.

Job Description
As a Senior Data Governance Engineer, you will play a crucial role in the development and implementation of our data governance architecture and strategy. You will work closely with cross-functional teams to ensure the integrity, quality, and security of our data assets. Your expertise in various data governance tools and custom implementations will be pivotal in driving our data governance initiatives forward.

Key areas of expertise include
- Implementing end-to-end data governance in medium to large data projects.
- Implementing, configuring, and maintaining data governance tools such as Collibra, Apache Atlas, Microsoft Purview, and BigID.
- Evaluating and recommending appropriate DG tools and technologies based on business requirements.
- Defining, implementing, and monitoring data quality rules and standards (a toy illustration follows this listing).
- Collaborating with data stewards, IT, legal, and business units to establish data governance processes.
- Providing guidance and support to data stewards.
- Working with business units to define, develop, and maintain business glossaries.
- Ensuring compliance with regulatory requirements and internal data governance frameworks.
- Collaborating with IT, data management teams, and business units to align data governance objectives.
- Communicating data governance initiatives and policies effectively across the organization.

Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Management, or a related field.
- 8-10 years of experience in data governance, data management, or a related field.
- Proven experience with data governance tools such as Collibra, Apache Atlas, Microsoft Purview, and BigID, and with end-to-end data governance implementations.
- Experience with cloud data quality monitoring and management.
- Proficiency with cloud-native data services and tools on Azure and AWS.
- Strong understanding of data quality principles and experience in defining and implementing data quality rules.
- Experience implementing and monitoring data quality remediation workflows to address data quality issues.
- Experience serving in a data steward role, with a thorough understanding of data stewardship responsibilities.
- Demonstrated experience in defining and maintaining business glossaries.
- Excellent analytical, problem-solving, and organizational skills.
- Strong communication and interpersonal skills, with the ability to work effectively with cross-functional teams.
- Knowledge of regulatory requirements related to data governance is a plus.

Preferred Skills
- Certification in Data Governance or Data Management (e.g., CDMP, Collibra certification).
- Knowledge of the Financial Services domain.
- Experience with data cataloging and metadata management.
- Familiarity with data governance, quality, and privacy regulations and frameworks (e.g., GDPR, CCPA, BCBS, COBIT, DAMA-DMBOK).
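The listing emphasizes defining and monitoring data quality rules. As a toy illustration, not tied to Collibra or Purview, such rules might be expressed over a pandas DataFrame like this; the dataset and column names are invented:

```python
# Toy data-quality rule checks of the kind a governance engineer defines;
# the dataset and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@x.com", "b@x.com", "bad-email", None],
})

rules = {
    "customer_id not null": df["customer_id"].notna().all(),
    "customer_id unique": df["customer_id"].dropna().is_unique,
    "email matches pattern": df["email"].dropna()
        .str.contains(r"^[^@]+@[^@]+\.[^@]+$").all(),
}

for rule, passed in rules.items():
    print(f"{rule}: {'PASS' if passed else 'FAIL'}")
```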

Posted 6 days ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Drive Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance; activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
- Work with clients to convert business problems and challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]
- Understand current and future-state enterprise architecture.
- Contribute to various technical streams during project implementation.
- Provide product- and design-level technical best practices.
- Interact with senior client technology leaders to understand their business goals, then create, architect, propose, develop, and deliver technology solutions.
- Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
- Recommend design alternatives for the data ingestion, processing, and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills and Attributes for Success
- Experience architecting highly scalable solutions on Azure, AWS, and GCP.
- Strong understanding of and familiarity with Azure/AWS/GCP and big data ecosystem components.
- Strong understanding of the underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Apache Spark using Python/Scala, and Spark Streaming.
- Hands-on experience with major components such as cloud ETL tools, Spark, and Databricks.
- Experience working with NoSQL data stores: at least one of HBase, Cassandra, MongoDB.
- Knowledge of Spark-Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks.
- Strong understanding of the underlying Hadoop architectural concepts and distributed computing paradigms.
- Good knowledge of Apache Kafka and Apache Flume.
- Experience with enterprise-grade solution implementations.
- Experience in performance benchmarking of enterprise applications.
- Experience in data security (in motion, at rest).
- Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent written and verbal communication skills, both formal and informal.
- The ability to multi-task under pressure and work independently with minimal supervision.
- A team player's attitude, enjoying a cooperative and collaborative team environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the big data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- Responsibility for evaluating technical risks and mapping out mitigation strategies.
- Working knowledge of at least one cloud platform: AWS, Azure, or GCP.
- Excellent business communication, consulting, and quality process skills.
- Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains.
- A minimum of 7 years of hands-on experience in one or more of the above areas.
- A minimum of 10 years of industry experience.

Ideally, you’ll also have
- Strong project management skills.
- Client management skills.
- Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


Role: Senior Data Engineer with Databricks
Experience: 5+ years
Job Type: Contract
Contract Duration: 6 months
Budget: 1.0 lakh per month
Location: Remote

Job Description
We are looking for a dynamic and experienced Senior Data Engineer (Databricks) to design, build, and optimize robust data pipelines using the Databricks Lakehouse platform. The ideal candidate has strong hands-on skills in Apache Spark, PySpark, and cloud data services, and a good grasp of Python and Java. This role involves close collaboration with architects, analysts, and developers to deliver scalable, high-performing data solutions across AWS, Azure, and GCP.

Essential Job Functions
1. Data Pipeline Development
- Build scalable and efficient ETL/ELT workflows using Databricks and Spark for both batch and streaming data.
- Leverage Delta Lake and Unity Catalog for structured data management and governance.
- Optimize Spark jobs by tuning configurations, caching, partitioning, and serialization techniques.
2. Cloud-Based Implementation
- Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery).
- Manage and optimize data storage, access control, and pipeline orchestration using native cloud tools.
- Use tools like Databricks Auto Loader and SQL warehouses for efficient data ingestion and querying (a sketch follows this listing).
3. Programming & Automation
- Write clean, reusable, production-grade code in Python and Java.
- Automate workflows using orchestration tools (e.g., Airflow, ADF, or Cloud Composer).
- Implement robust testing, logging, and monitoring mechanisms for data pipelines.
4. Collaboration & Support
- Collaborate with data analysts, data scientists, and business users to meet evolving data needs.
- Support production workflows, troubleshoot failures, and resolve performance bottlenecks.
- Document solutions, maintain version control, and follow Agile/Scrum processes.

Required Skills
Technical skills:
- Databricks: hands-on experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration.
- Spark: expertise in Spark transformations, joins, window functions, and performance tuning.
- Programming: strong PySpark and Java, with experience in data validation and error handling.
- Cloud services: good understanding of AWS, Azure, or GCP data services and security models.
- DevOps/tools: familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools.
Experience:
- 5-8 years of data engineering or backend development experience.
- Minimum 1-2 years of hands-on work in Databricks with Spark.
- Exposure to large-scale data migration, processing, or analytics projects.
Certifications (nice to have): Databricks Certified Data Engineer Associate.

Working Conditions
- Hours of work: full-time hours; flexibility for remote work, with availability expected during US hours.
- Overtime expectations: overtime may not be required as long as the commitment is accomplished.
- Work environment: primarily remote; occasional on-site work may be needed only during client visits.
- Travel requirements: no travel required.
- On-call responsibilities: on-call duties during deployment phases.
- Special conditions or requirements: not applicable.

Workplace Policies and Agreements
- Confidentiality Agreement: required to safeguard client-sensitive data.
- Non-Compete Agreement: must be signed to ensure proprietary model security.
- Non-Disclosure Agreement: must be signed to ensure client confidentiality and security.
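The functions above call out Databricks Auto Loader with Delta Lake. A minimal sketch of that ingestion pattern, assuming a Databricks notebook where `spark` is predefined, could look like this; the paths and table name are invented, and the `cloudFiles` source only exists on the Databricks runtime:

```python
# Hypothetical Auto Loader -> Delta Lake ingestion sketch; paths, schema
# location, and table name are invented. Runs only on Databricks.
raw = (spark.readStream
       .format("cloudFiles")                       # Databricks Auto Loader source
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/chk/orders_schema")
       .load("/mnt/raw/orders"))

(raw.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/orders")
    .trigger(availableNow=True)                    # process new files, then stop
    .toTable("bronze.orders"))
```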

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, and AI/ML and data science experts: professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, the Indian Institutes of Management (IIM), and the Indian Institutes of Technology (IIT).

Job Title: Sr. DevOps Engineer
Job Location: Offshore – India (Pune)

Job summary
We are a leading Software as a Service (SaaS) company, revolutionizing the US healthcare industry by leveraging cutting-edge Artificial Intelligence (AI) solutions to transform and manage data. We're looking for an experienced and passionate DevOps Engineer to join our dynamic team and help us scale our cloud infrastructure to new heights. Experience with Linux, AWS, Kubernetes, Docker, and Terraform is essential. Immediate joiners are preferred.

Responsibilities
Cloud Infrastructure Management:
- Provision, configure, and maintain AWS cloud infrastructure using Infrastructure as Code (IaC) principles.
- Design and implement scalable, reliable, and secure cloud infrastructure using AWS services such as EC2, S3, VPC, Lambda, and others.
Automation & CI/CD:
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems.
- Design, deploy, and manage CI/CD pipelines across multiple environments to ensure seamless application delivery.
- Automate the code delivery pipeline with the goals of one-click deployments, rollbacks, and parameterized builds.
Security & Monitoring:
- Implement best practices for securing cloud infrastructure, including VPCs, IAM, security groups, and NACLs.
- Design and deploy monitoring solutions using AWS CloudWatch and the ELK Stack to ensure optimal performance of infrastructure and applications (an illustrative sketch follows this listing).
- Lead initiatives to strengthen cloud security through the introduction of new solutions and improvements.
Collaboration & Leadership:
- Collaborate with software engineers, QA, and other cross-functional teams to ensure a smooth development and deployment process.
- Lead projects through design, pilot, and deployment phases for new DevOps and security solutions in production.

Desired Profile
Technical expertise:
- Proven experience with AWS services such as EC2, S3, Lambda, VPC, CloudFront, API Gateway, ECS, IAM, CloudFormation, and CodeDeploy.
- Hands-on experience with Infrastructure as Code (IaC) tools like CloudFormation and Terraform.
- Working knowledge of Kubernetes, Helm, and Argo for managing containers, deploying applications, and automating workflows.
- Proficiency in containerization using Docker and deployment orchestration tools such as Jenkins and GitHub Actions.
Automation & scripting:
- Experience automating infrastructure management and application deployment using Python, Bash, shell scripting, or similar languages.
- Experience with CI/CD pipelines and build automation.
Security & monitoring:
- Strong understanding of cloud security principles, including IAM roles, security groups, VPCs, and other AWS security mechanisms.
- Monitoring and performance tuning experience for AWS infrastructure (EC2, RDS, S3).
Database & systems knowledge:
- Experience with relational and non-relational databases like RDS and DynamoDB.
- Strong background in Linux systems administration and experience with web and application server technologies like Apache, Nginx, and IIS.
Additional skills:
- Familiarity with Agile/Scrum development methodologies.
- Experience with DevOps tools like Chef, Puppet, or Ansible is a plus.

Qualification & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or another relevant discipline.
- 3 to 7 years of experience in DevOps, cloud architecture, and AWS infrastructure management.
- Proven track record of leading or contributing to the development of high-availability, secure, and scalable cloud infrastructure.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skill set, we welcome your application.

HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contributions for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, and a collaborative working environment; smart mentorship and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.

CCPA disclosure notice: https://www.hilabs.com/privacy
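The role highlights monitoring AWS infrastructure with CloudWatch. As an illustrative boto3 sketch, not HiLabs code, creating a CPU alarm might look like this; the region, instance ID, thresholds, and SNS topic ARN are hypothetical:

```python
# Illustrative boto3 sketch of CloudWatch alarm setup; instance ID,
# thresholds, region, and SNS topic ARN are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                    # 5-minute evaluation windows
    EvaluationPeriods=2,           # alarm after two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```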

Posted 6 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Senior Cloud Data Developer
We are seeking an exceptional Cloud Data Developer who can bridge the gap between data engineering and cloud-native application development. This role combines strong programming skills with data engineering expertise to build and maintain scalable data solutions in the cloud.

Position Overview
Work with cutting-edge cloud technologies to develop data-intensive applications, create efficient data pipelines, and build robust data processing systems using AWS services and modern development practices.

Core Responsibilities
- Design and develop data-centric applications using Java Spring Boot and AWS services.
- Create and maintain scalable ETL pipelines using AWS EMR and Apache NiFi.
- Implement data workflows and orchestration using AWS MWAA (Managed Workflows for Apache Airflow).
- Build real-time data processing solutions using AWS SNS/SQS and AWS Pipes (the SQS consumption pattern is sketched after this listing).
- Develop and optimize data storage solutions using AWS Aurora and S3.
- Manage data discovery and metadata using the AWS Glue Data Catalog.
- Create search and analytics solutions using AWS OpenSearch Service.
- Design and implement event-driven architectures for data processing.

Technical Requirements
Primary skills:
- Strong proficiency in Java and the Spring Boot framework.
- Extensive experience with AWS data services: AWS EMR for large-scale data processing, AWS Glue Data Catalog for metadata management, AWS OpenSearch Service for search and analytics, AWS Aurora for relational databases, and AWS S3 for data lake implementation.
- Expertise in data pipeline development using Apache NiFi, AWS MWAA, AWS Pipes, and AWS SNS/SQS.
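The responsibilities above include event-driven processing over SQS. Although the role is Java/Spring Boot first, the consumption pattern is language-agnostic; a minimal boto3 (Python) sketch with long polling, using an invented queue URL, might look like:

```python
# Hypothetical sketch of SQS-based event consumption; the queue URL,
# region, and message contents are invented.
import json
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = "https://sqs.ap-south-1.amazonaws.com/123456789012/data-events"

resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=10,
                           WaitTimeSeconds=20)          # long polling
for msg in resp.get("Messages", []):
    event = json.loads(msg["Body"])
    print("processing", event.get("id"))                # hand off to the pipeline
    sqs.delete_message(QueueUrl=queue_url,               # ack: remove from queue
                       ReceiptHandle=msg["ReceiptHandle"])
```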

Posted 6 days ago

Apply

2.0 years

0 Lacs

Dholera, Gujarat, India

On-site


About The Business
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs), and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications, and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.'

Job Responsibilities
- Architect and implement scalable offline data pipelines for manufacturing systems including AMHS, MES, SCADA, PLCs, vision systems, and sensor data.
- Design and optimize ETL/ELT workflows using Python, Spark, SQL, and orchestration tools (e.g., Airflow) to transform raw data into actionable insights (a minimal DAG sketch follows this listing).
- Lead database design and performance tuning across SQL and NoSQL systems, optimizing schema design, queries, and indexing strategies for manufacturing data.
- Enforce robust data governance by implementing data quality checks, lineage tracking, access controls, security measures, and retention policies.
- Optimize storage and processing efficiency through strategic use of formats (Parquet, ORC), compression, partitioning, and indexing for high-performance analytics.
- Implement streaming data solutions (using Kafka/RabbitMQ) to handle real-time data flows and ensure synchronization across control systems.
- Build dashboards using analytics tools like Grafana.
- Develop standardized data models and APIs to ensure consistency across manufacturing systems and enable data consumption by downstream applications.
- Collaborate cross-functionally with platform engineers, data scientists, automation teams, IT operations, manufacturing, and quality departments.
- Mentor junior engineers while establishing best practices and documentation standards, and foster a data-driven culture throughout the organization.

Essential Attributes
- Expertise in Python programming for building robust ETL/ELT pipelines and automating data workflows.
- Good understanding of, and proficiency with, the Hadoop ecosystem.
- Hands-on experience with Apache Spark (PySpark) for distributed data processing and large-scale transformations.
- Strong proficiency in SQL for data extraction, transformation, and performance tuning across structured datasets.
- Proficiency in using Apache Airflow to orchestrate and monitor complex data workflows reliably.
- Skill in real-time data streaming using Kafka or RabbitMQ to handle data from manufacturing control systems.
- Experience with both SQL and NoSQL databases, including PostgreSQL, TimescaleDB, and MongoDB, for managing diverse data types.
- In-depth knowledge of data lake architectures and efficient file formats like Parquet and ORC for high-performance analytics.
- Proficiency in containerization and CI/CD practices using Docker and Jenkins or GitHub Actions for production-grade deployments.
- Strong understanding of data governance principles, including data quality, lineage tracking, and access control.
- Ability to design and expose RESTful APIs using FastAPI or Flask to enable standardized and scalable data consumption.

Qualifications
- BE/ME degree in Computer Science, Electronics, or Electrical Engineering.

Desired Experience Level
- Master's degree plus 2 years of relevant experience, or Bachelor's degree plus 4 years of relevant experience.
- Experience in the semiconductor industry is a plus.
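The responsibilities above center on Airflow-orchestrated ETL. A minimal Airflow 2.x DAG sketch of that pattern, with invented task logic and schedule, might look like:

```python
# Hypothetical Airflow DAG sketch of the ETL orchestration this role
# describes; task logic, names, and schedule are invented.
# Uses the Airflow 2.4+ `schedule` parameter.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw sensor/MES data")   # placeholder extract step

def transform():
    print("clean and aggregate")        # placeholder transform step

with DAG(
    dag_id="manufacturing_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task      # transform runs only after extract
```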

Posted 6 days ago

Apply

4.0 years

0 Lacs

Dholera, Gujarat, India

On-site


About The Business
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs), and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications, and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.'

Job Responsibilities
- Architect and implement a scalable, offline Data Lake for structured, semi-structured, and unstructured data in an on-premises, air-gapped environment.
- Collaborate with data engineers, factory IT, and edge device teams to enable seamless data ingestion and retrieval across the platform.
- Integrate with upstream systems like MES, SCADA, and process tools to capture high-frequency manufacturing data efficiently.
- Monitor and maintain system health, including compute resources, storage arrays, disk I/O, memory usage, and network throughput.
- Optimize Data Lake performance via partitioning, deduplication, compression (Parquet/ORC), and effective indexing strategies (a small sketch follows this listing).
- Select, integrate, and maintain tools like Apache Hadoop, Spark, Hive, and HBase, plus custom ETL pipelines suitable for offline deployment.
- Build custom ETL workflows for bulk and incremental data ingestion using Python, Spark, and shell scripting.
- Implement data governance policies covering access control, retention periods, and archival procedures, with security and compliance in mind.
- Establish and test backup, failover, and disaster recovery protocols specifically designed for offline environments.
- Document architecture designs, optimization routines, job schedules, and standard operating procedures (SOPs) for platform maintenance.
- Conduct root cause analysis for hardware failures, system outages, and data integrity issues.
- Drive system scalability planning for future multi-fab or multi-site expansions.

Essential Attributes (Tech Stack)
- Hands-on experience designing and maintaining offline or air-gapped Data Lake environments.
- Deep understanding of Hadoop ecosystem tools: HDFS, Hive, MapReduce, HBase, YARN, ZooKeeper, and Spark.
- Expertise in custom ETL design and large-scale batch and stream data ingestion.
- Strong scripting and automation capabilities using Bash and Python.
- Familiarity with data compression formats (ORC, Parquet) and ingestion frameworks (e.g., Flume).
- Working knowledge of message queues such as Kafka or RabbitMQ, with a focus on integration logic.
- Proven experience in system performance tuning, storage efficiency, and resource optimization.

Qualifications
- BE/ME in Computer Science, Machine Learning, Electronics Engineering, Applied Mathematics, or Statistics.

Desired Experience Level
- 4 years of relevant experience post Bachelor's, or 2 years of relevant experience post Master's.
- Experience in the semiconductor industry is a plus.
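The responsibilities above include optimizing the lake with partitioning and columnar compression. A minimal PySpark sketch of writing a partitioned, Snappy-compressed Parquet dataset, with invented paths and columns, could look like:

```python
# Illustrative PySpark sketch of the partitioning/compression optimization
# mentioned for the Data Lake; paths and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-compaction").getOrCreate()

df = spark.read.json("/lake/raw/tool_events")       # hypothetical raw zone

(df.repartition("fab", "event_date")                # co-locate rows per partition
   .write.mode("overwrite")
   .partitionBy("fab", "event_date")                # directory-level partitioning
   .option("compression", "snappy")                 # columnar + compressed
   .parquet("/lake/curated/tool_events"))
```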

Posted 6 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Saras Analytics
We are an ecommerce-focused, end-to-end data analytics firm assisting enterprises and brands in data-driven decision making to maximize business value. Our suite of work spans extraction, transformation, visualization, and analysis of data, delivered via industry-leading products, solutions, and services. Our flagship product is Daton, an ETL tool. We have now ventured into building exciting, easy-to-use data visualization solutions on top of Daton. And lastly, we have a world-class data team that understands the story the numbers are telling and articulates the same to CXOs, thereby creating value.

Where we are Today
We are a bootstrapped, profitable, and fast-growing (2x y-o-y) startup with old-school value systems. We play in a very exciting space: the intersection of data analytics and ecommerce, both of which are game changers. Today, the global economy faces headwinds forcing companies to downsize, outsource, and offshore, creating strong tailwinds for us. We are an employee-first company, valuing and encouraging talent, and we live by those values at all stages of our work without compromising on the value we create for our customers. We strive to make Saras a career, and not just a job, for the talented folks who have chosen to work with us.

The Role
We are seeking an accomplished Lead Data Engineer with strong programming skills, cloud expertise, and in-depth knowledge of BigQuery/Snowflake data warehousing technologies. As a key leader in our data engineering team, you will play a critical role in designing, implementing, and optimizing data pipelines, leveraging your expertise in programming, cloud platforms, and modern data warehousing solutions.

Responsibilities
- Data pipeline architecture: lead the design and architecture of scalable and efficient data pipelines, ensuring optimal performance and reliability.
- Programming and scripting: use strong programming skills, particularly in languages like Python, to develop robust and maintainable data engineering solutions.
- Cloud platform expertise: apply extensive experience with cloud platforms (e.g., AWS, Azure, Google Cloud) to design, deploy, and optimize data engineering solutions in a cloud environment.
- BigQuery/Snowflake knowledge: demonstrate deep understanding and hands-on experience with BigQuery/Snowflake for efficient data storage, processing, and analysis (a small BigQuery sketch follows this listing).
- ETL processes: lead the development of Extract, Transform, Load (ETL) processes, ensuring seamless integration of data from various sources into the data warehouse.
- Data modeling and optimization: design and implement effective data models to support ETL processes and ensure data integrity and efficiency.
- Collaboration and leadership: collaborate with cross-functional teams, providing technical leadership and guidance to junior data engineers; work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver effective data solutions.
- Quality assurance: implement comprehensive data quality checks and validation processes to ensure the accuracy and completeness of data.
- Documentation: create and maintain detailed documentation for data engineering processes, data models, and cloud configurations.

Technical Skills
- Programming languages: expertise in programming languages, with a strong emphasis on Python.
- Cloud platforms: extensive experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Big data technologies: proficiency in big data technologies and frameworks for distributed computing.
- Data warehousing: in-depth knowledge of modern data warehousing solutions, with specific expertise in BigQuery/Snowflake.
- ETL tools: experience with ETL tools like Apache NiFi, Talend, or similar.
- SQL: strong proficiency in writing and optimizing SQL queries for data extraction, transformation, and loading.
- Collaboration tools: experience using collaboration and project management tools for effective communication and project tracking.

Soft Skills
- Strong leadership and mentoring capabilities.
- Excellent communication and presentation skills.
- Strategic thinking and problem-solving abilities.
- Ability to work collaboratively in a cross-functional team environment.

Educational Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

Experience
8+ years of experience in data engineering roles with a focus on programming, cloud platforms, and data warehousing.

If you are an experienced Lead Data Engineer with a strong programming background, cloud expertise, and specific knowledge of BigQuery/Snowflake, we encourage you to apply. Please submit your resume and a cover letter highlighting your technical skills, leadership experience, and contributions to data engineering projects.
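The role centers on BigQuery/Snowflake warehousing driven from Python. A minimal google-cloud-bigquery sketch, assuming default GCP credentials and an invented dataset and table, might look like:

```python
# Hypothetical sketch of querying BigQuery from Python; the dataset and
# table names are invented, and default GCP credentials are assumed.
from google.cloud import bigquery

client = bigquery.Client()          # picks up application-default credentials

sql = """
    SELECT order_date, SUM(revenue) AS revenue
    FROM `analytics.demo_orders`
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 10
"""
for row in client.query(sql).result():   # blocks until the job completes
    print(row.order_date, row.revenue)
```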

Posted 6 days ago

Apply

7.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office


Data Engineer – Skills and Qualifications
- SQL – mandatory.
- Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda) – mandatory.
- Proficiency in PySpark or Python for big data processing – mandatory (a Glue job skeleton follows this listing).
- Experience with orchestration tools like Apache Airflow and AWS CodePipeline – mandatory.
- Experience working with DBT – nice to have.
- Familiarity with CI/CD tools and DevOps practices.
- Expertise in data modeling, ETL processes, and data warehousing.
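The mandatory skills combine PySpark with AWS Glue. A bare-bones Glue PySpark job skeleton of the kind those skills imply might look like this; the catalog database, table, and S3 path are invented, and the `awsglue` modules are only available inside the Glue runtime:

```python
# Hypothetical AWS Glue PySpark job skeleton; catalog database, table,
# and output path are invented. Runs only inside the Glue environment.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical table).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders")

# Write curated Parquet back to S3 (hypothetical bucket).
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://demo-bucket/curated/orders/"},
    format="parquet")

job.commit()
```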

Posted 6 days ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Overview
Leading AI-driven global supply chain solutions software product company and one of Glassdoor's "Best Places to Work." Seeking an astute individual with a strong technical foundation and the ability to be hands-on in developing and building automation to improve efficiency, productivity, and customer experience. Deep knowledge of industry best practices, with the ability to implement them while working with the larger cloud, support, and product teams.

Scope
We are seeking a highly skilled AI/Prompt Engineer to design, implement, and maintain artificial intelligence (AI) and machine learning (ML) solutions for our organization. The ideal candidate will have a deep understanding of AI and ML technologies, as well as experience with data analysis, software development, and cloud computing.

Primary Responsibilities
- Design and implement AI, conversational AI, and ML solutions to solve business problems and improve customer experience and operational efficiency.
- Develop and maintain machine learning models using tools such as TensorFlow, Keras, and PyTorch.
- Collaborate with cross-functional teams to identify opportunities for AI and ML solutions and develop prototypes and proofs of concept.
- Develop and maintain data pipelines and ETL processes to support AI and ML workflows.
- Monitor and optimize model performance, accuracy, and scalability.
- Stay up to date with emerging AI and ML technologies and evaluate their potential impact on our organization.
- Develop and maintain chatbots and voice assistants using tools such as Dialogflow, Amazon Lex, and Microsoft Bot Framework.
- Develop and maintain integrations with third-party systems and APIs to support conversational AI workflows.
- Develop and maintain technical documentation, including architecture diagrams, design documents, and standard operating procedures.
- Provide technical guidance and mentorship to other members of the data engineering and software development teams.

What We Are Looking For
- Bachelor's degree in Computer Science, Information Technology, or a related field, with 3+ years of experience in conversational AI engineering, design, and implementation.
- Strong understanding of NLP technologies, including intent recognition, entity extraction, and sentiment analysis (a small sketch follows this listing).
- Experience with software development, including proficiency in Python and familiarity with software development best practices and tools (Git, Agile methodologies, etc.).
- Familiarity with cloud computing platforms (AWS, Azure, Google Cloud) and related services (S3, EC2, Lambda, etc.).
- Experience with big data technologies such as Hadoop, Spark, and Kafka.
- Experience with containerization (Docker, Kubernetes).
- Experience with data visualization tools (Tableau, Power BI, etc.).
- Experience with reinforcement learning and/or generative models.
- Experience with machine learning technologies and frameworks (TensorFlow, Keras, etc.).
- Strong communication and collaboration skills.
- Strong problem-solving and analytical skills.
- Strong attention to detail and the ability to prioritize tasks effectively.
- Ability to work independently and as part of a team, in an agile and fast-paced development environment.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success, and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
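The requirements above include sentiment analysis among the NLP technologies. As a small illustration, not the employer's stack, the Hugging Face Transformers pipeline covers this in a few lines; the model is the library default rather than a stated standard:

```python
# Illustrative sentiment-analysis sketch using the transformers pipeline;
# the default model is used, and the sample texts are invented.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

for text in ["The delivery ETA keeps slipping.",
             "Great turnaround on this order!"]:
    print(text, "->", sentiment(text)[0])   # label + confidence score
```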

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Production Support / Senior Production Support
Location: Mumbai

Candidate expectations
- 5+ years of experience in production support / application support roles.
- Knowledge of the banking domain and products.
- Must accept rotational shifts (24x7).

Job Description
- Knowledge of Finacle support is desirable, including Finacle product knowledge for the payment system.
- Knowledge of NEFT/RTGS, SFMS, and import and export applications in the banking sector.
- IBM MQ support; JBoss, Apache Tomcat, and Java knowledge is desirable.
- In-depth knowledge of SQL and PL/SQL.
- Well versed in shell scripting and the Linux and Windows platforms.
- ITIL framework knowledge.
- Adherence to ISO 9001:2008, ISO 27001, policies, and procedures.
- Proven experience troubleshooting security issues across various technologies.

Skills Required
Role: Production Support / Senior Production Support
Industry Type: ITES/BPO/KPO
Functional Area: ITES/BPO/Customer Service
Required Education: Bachelor of Computer Science
Employment Type: Full Time, Permanent
Key Skills: FINACLE, ITIL, PRODUCTION SUPPORT

Other Information
Job Code: GO/JC/076/2025
Recruiter Name:

Posted 6 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About the Role
Schneider Electric is seeking a rock-star Java Developer who thrives in building scalable, resilient backend systems and isn’t afraid to roll up their sleeves and contribute to frontend work when needed. You’ll be part of a high-impact team driving digital transformation across energy and automation solutions. This role is backend-heavy, working with a modern cloud-native stack. Frontend skills in React are a bonus, useful for occasional UI enhancements or end-to-end feature development.

Key Responsibilities
- Architect and implement microservices using Spring Boot.
- Deploy and manage services on Azure Kubernetes Service (AKS).
- Design and maintain streaming pipelines with Apache Kafka (the consumption pattern is sketched after this listing).
- Work with MongoDB and SQL Server for data storage and access patterns.
- Collaborate closely with architects, DevOps, and frontend engineers to build secure, performant applications.
- Contribute to frontend development in React, as needed.
- Ensure system reliability, scalability, and performance.
- Follow agile practices and participate in code reviews, sprint planning, and retrospectives.

Required Skills & Experience
- 5+ years of backend development experience in Java / Spring Boot.
- Strong knowledge of Azure cloud services, particularly AKS (Azure Kubernetes Service).
- Experience with Kafka, including stream processing or event-driven architecture.
- Hands-on experience with MongoDB and SQL Server.
- Proficiency in REST APIs, secure service communication, and scalable microservices.
- Working knowledge of Docker and container orchestration.
- Comfort working in CI/CD environments (Azure DevOps preferred).

Nice-to-Have
- Experience building or maintaining frontend apps using React.
- Exposure to OAuth2, OpenID Connect, or other identity protocols.
- Knowledge of API gateway design, caching, and distributed systems.
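The responsibilities above revolve around event-driven Kafka consumption. Although this role is Java/Spring Boot, the pattern itself is language-agnostic; a minimal kafka-python sketch with an invented topic and broker might look like:

```python
# Language-agnostic illustration (via kafka-python) of the event-driven
# consumption pattern this role describes; topic, broker, and group are invented.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "energy-telemetry",                                  # hypothetical topic
    bootstrap_servers="broker:9092",
    group_id="demo-consumers",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",                        # replay from start if new
)

for record in consumer:                                  # blocks, yielding messages
    print(record.partition, record.offset, record.value)
```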

Posted 6 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role will place you at the forefront of this mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Sr Principal Engineers to join us on this mission.

Key Responsibilities:
• Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, particularly focusing on computer vision and LLM applications for sports media analysis.
• Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets (such as sports imagery and video).
• Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including elements not seen during training.
• LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and Generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections).
• System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability. This includes developing APIs and integrating models into broader Nielsen Sports platforms.
• UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow for human-in-the-loop validation (see the sketch following this listing).
• Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields. Evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports.
• Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations.
• Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources (including cloud platforms like AWS, GCP, or Azure).
• Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization.

Required Qualifications:
• Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
• 5+ years (for Principal / MTS-4) or 8+ years (for Senior Principal / MTS-5) of hands-on experience in developing and deploying AI/ML models, with a strong focus on computer vision.
• Proven experience in training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets.
• Experience fine-tuning LLMs such as Llama 2/3, Mistral, or other open-source models available on Hugging Face, using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth.
• Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras.
• Demonstrable experience with multi-modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques.
• Experience developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django).
• Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow).
• Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices.
• Excellent problem-solving skills and the ability to work with complex, large-scale datasets.
• Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences.
• Full-stack development experience in any one stack.

Preferred Qualifications / Bonus Skills:
• Experience with Generative AI vision models for tasks like image analysis, description, or validation.
• Track record of publications in top-tier AI/ML/CV conferences or journals.
• Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics).
• Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services.
• Experience with video processing and analysis techniques.
• Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka).
• Demonstrated ability to lead technical projects and mentor team members.

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
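As an illustration of the human-in-the-loop validation tooling this listing mentions, here is a minimal Streamlit sketch. The model output is stubbed and the labels and file name are hypothetical; a real tool would call the detection model and persist reviewer verdicts.

```python
# Minimal human-in-the-loop validation UI sketch (pip install streamlit pillow;
# run with: streamlit run validate.py). The detection step is stubbed.
import streamlit as st
from PIL import Image

st.title("Logo detection review")

uploaded = st.file_uploader("Upload a broadcast frame", type=["jpg", "png"])
if uploaded is not None:
    st.image(Image.open(uploaded), caption="Frame under review")
    # In a real tool, detections would come from the vision model here.
    verdict = st.radio("Is the detected logo correct?", ["Yes", "No", "Unsure"])
    if st.button("Submit"):
        st.write(f"Recorded verdict: {verdict}")  # persist to a store in practice
```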

Posted 6 days ago

Apply

5.0 years

0 - 0 Lacs

Puducherry

On-site

Title: Python Developer
Experience: 5-6 Years
Location: Puducherry (On-site)

Job Summary:
We're hiring a skilled Python Developer to design, build, and optimize scalable applications. You'll collaborate with cross-functional teams to deliver high-performance solutions while adhering to best practices in coding, testing, and deployment.

Key Responsibilities:
✔ Backend Development: Design and implement robust APIs using Django/Flask/FastAPI (a minimal sketch follows this listing). Integrate with databases (PostgreSQL, MySQL, MongoDB). Optimize applications for speed and scalability.
✔ Cloud & DevOps: Deploy apps on AWS/Azure/GCP (Lambda, EC2, S3). Use Docker/Kubernetes for containerization. Implement CI/CD pipelines (GitHub Actions/Jenkins).
✔ Data & Automation: Develop ETL pipelines with Pandas, NumPy, and Apache Airflow. Automate tasks using Python scripts.
✔ Collaboration: Work with frontend teams (React/JS) on full-stack integration. Mentor junior developers and conduct code reviews.

Skills Required
Must-Have:
• 5+ years of Python development (OOP, async programming).
• Frameworks: Django/Flask/FastAPI.
• Databases: SQL/NoSQL (e.g., PostgreSQL, MongoDB).
• APIs: RESTful/gRPC, authentication (OAuth, JWT).
• DevOps: Docker, Git, CI/CD, AWS/Azure basics.

Good-to-Have:
• Frontend basics: HTML/CSS, JavaScript.

Perks & Benefits: Competitive salary

How to Apply: Send your resume and GitHub/portfolio links to hr@cloudbeestech.com with the subject: "Python Developer Application".

Job Type: Full-time
Pay: ₹40,000.00 - ₹50,000.00 per month
Location Type: In-person
Schedule: Day shift
Work Location: In person
Application Deadline: 15/06/2025
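As a quick illustration of the API work this listing describes, here is a minimal FastAPI sketch. The `Item` model and the `/items` endpoint are hypothetical examples, not requirements from the posting.

```python
# Minimal FastAPI sketch (pip install fastapi uvicorn;
# run with: uvicorn main:app --reload).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
async def create_item(item: Item) -> dict:
    # In production this would write to PostgreSQL/MongoDB.
    return {"name": item.name, "price": item.price, "status": "created"}
```

POSTing JSON to `/items` returns the validated payload; Pydantic rejects malformed requests automatically, which is a large part of why FastAPI suits this kind of role.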

Posted 6 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Test Lead
Gurgaon/Bangalore, India

The Test Lead will be responsible for driving quality assurance strategies, processes, and standards for the company's diverse suite of business applications, ensuring they meet high quality standards. This role involves collaborating with cross-functional teams consisting of both AXA XL staff and testing vendors. Primary responsibilities include guiding our testing teams in planning, developing, and executing comprehensive testing strategies to support various IT delivery teams within AXA XL's Global Technology (GT) organization, and ensuring adherence by conducting regular process audits to identify gaps and taking the appropriate corrective action. The role holder will also demonstrate a best-in-class level of expertise in test management and testing best practices.

What You'll Be Doing
What will your essential responsibilities include?
• Lead and manage testing teams and business applications within your portfolio, ensuring optimal performance.
• Take responsibility for all types of testing performed by the testing teams, including functional and non-functional as well as user acceptance testing, when required.
• Develop and implement testing strategies that align with organizational goals and industry best practices.
• Oversee the test automation strategy, including selection of appropriate tools and technologies.
• Collaborate with our testing vendor partners as well as our AXA XL delivery team members to ensure comprehensive testing coverage and the timely delivery of software changes/enhancements.
• Act as the primary point of contact for all testing-related inquiries, escalations, and coordination with AXA XL stakeholders.
• Coordinate with project managers, business analysts, and development teams to define UAT scope and objectives, particularly in relation to property and casualty insurance products.
• Facilitate UAT sessions, including training end-users and gathering feedback on the product, with a focus on property and casualty functionalities.
• Take part in and contribute to TCoE support initiatives and projects as required.
• Work with the TCoE team to understand best practices and effectively implement them on your assigned applications to achieve our expected quality results.
• Maintain a shift-left methodology by encouraging a test-automation-first approach.
• Oversee the planning, execution, and reporting of end-to-end system integration testing, ensuring thorough validation of software applications.
• Monitor testing progress to create sufficient visibility for stakeholders and management to understand the status of testing at any time.
• Review testing artifacts created by testing teams, such as the test strategy, test plan, and test summary report, to make sure they meet industry standards.
• Identify, communicate, and track testing risks and issues, then help develop mitigation plans to bring them to closure.
• Help estimate new testing work requests and manage estimates against actuals to make sure change controls are appropriately managed.
• Provide guidance and training to your assigned testing teams on our TCoE's best practices, tools, and methodologies.
• Define, collect, and analyze key performance indicators (KPIs) and metrics to evaluate testing effectiveness and drive improvements.

What You Will Bring
We're looking for someone who has these abilities and skills:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Effective understanding of software development methodologies, including agile.
• Proficiency in testing frameworks, tools, and best practices, especially TMMi and ISTQB.
• Robust knowledge of the various types of software testing - static, smoke, system, system integration, regression, UAT, performance, compatibility, etc.
• Extensive experience in designing and developing test automation frameworks.
• Understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and microservices architecture.
• Experience with testing tools such as Selenium, Apache JMeter, Gatling, UFT, and Performance Center.
• Proven leadership and team management skills, with the ability to motivate and guide diverse teams.
• Excellent interpersonal and communication skills to effectively collaborate with both technical and non-technical stakeholders.
• Experience with property & casualty insurance lines of business and products is preferred.
• Experience implementing GenAI-based solutions to optimize testing processes.

You will report to the TCoE Lead.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We Offer
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another - and our business - to move forward and succeed.
• Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability, and inclusion, with 20 chapters around the globe
• Robust support for Flexible Working Arrangements
• Enhanced family-friendly leave benefits
• Named to the Diversity Best Practices Index
• Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
• Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
• Addressing climate change: The effects of a changing climate are far reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
• Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
• AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderābād

On-site

As a Site Reliability Engineer at Thomson Reuters, you will continually look to optimize systems and services for security, automation, performance, and availability, while ensuring the solutions you develop adhere and align to architectural standards. You will be responsible for ensuring that technology systems and related procedures adhere to organizational values. You'll deliver innovative solutions that impact our customers' business every day and partner with teammates globally to develop innovative solutions for the best products in the market. We are committed to the growth of our Engineering teams and want people who are just as excited by this as we are!

About the Role
In this opportunity, as a Site Reliability Engineer you will:
• Work with a wide array of technologies to meet objectives, including: Windows and Linux servers; MSSQL, MySQL, Oracle, and PostgreSQL database servers; Tomcat, Apache, and IIS application servers; Jenkins or other CI/CD tooling; enterprise storage; software deployment and lifecycle management software; and public and private cloud technologies (AWS, Azure preferred), including CI/CD experience in AWS.
• Perform daily tasks and ongoing projects relative to operational and application support of TRTA Professional products, including monitoring, backup/restores, application availability, scheduled/non-scheduled maintenance, release management, code deployment, and configuration management.
• Participate in the implementation, deployment, and maintenance of TRTA Professional online applications and related tooling.
• Participate in the on-call rotation.
• Drive efficiency and accuracy through scripting, automation, documentation, and continual process improvement (see the sketch following this listing).
• Bring a strong troubleshooting mentality and ask questions about architecture or implementation.
• Provide support during complex and/or major incidents.
• Produce, deliver, and maintain appropriate documentation for systems in accordance with document control standards and procedures.
• Collaborate with business, third-party vendors, developers, application support, and technical operations groups to determine appropriate hardware/software needs and to resolve any issues impacting application processes.
• Identify risks and issues and take ownership to deliver appropriate resolutions.
• Fully familiarize yourself with all aspects of the developed code.
• Work with leadership to provide regular status reports on given project tasks.

About You
You're a fit for our Site Reliability Engineer roles if you have:
• Bachelor's degree in Computer Science, Computer Engineering, or Information Systems, or equivalent experience
• 3+ years of experience hosting cloud-native applications on the public cloud
• Solid Windows and/or open-source (Linux) systems administration skillset
• Experience with Amazon Web Services technologies including CloudFormation, Elastic Load Balancer, Auto Scaling Groups, Route 53, and CloudWatch
• Experience with Azure services
• Experience with the Software Development Life Cycle and deployment methodologies
• Experience migrating large-scale applications to the public cloud
• Experience automating systems in Python and/or PowerShell
• Experience with systems that connect to central storage devices, NAS or SAN
• Experience with monitoring and alerting (Datadog preferred)
• Experience with containerization of microservices
• Experience building CI/CD pipelines (AWS preferred)
• Excellent communication and interpersonal skills
• Fundamental understanding of networking technologies

#LI-PS1

What's in it For You?
• Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
• Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
• Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
• Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
• Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
• Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
• Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
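As one example of the Python automation this role calls for, here is a minimal boto3 sketch that publishes a custom CloudWatch metric, the kind of small script an SRE might run after a backup job. The namespace and metric name are hypothetical; it assumes AWS credentials are already configured.

```python
# Minimal boto3 sketch: push a custom health metric to CloudWatch
# (pip install boto3; namespace/metric name are hypothetical).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="TRTA/AppHealth",          # hypothetical namespace
    MetricData=[{
        "MetricName": "FailedBackups",   # hypothetical metric
        "Value": 0.0,
        "Unit": "Count",
    }],
)
```

A CloudWatch alarm on this metric would then page the on-call rotation when the count rises above zero.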

Posted 6 days ago

Apply

10.0 years

0 Lacs

Hyderābād

On-site

Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary:
Qualcomm is seeking a seasoned Staff Engineer, DevOps to join our central software engineering team. In this role, you will lead the design, development, and deployment of scalable cloud-native and hybrid infrastructure solutions, modernize legacy systems, and drive DevOps best practices across products. This is a hands-on architectural role ideal for someone who thrives in a fast-paced, innovation-driven environment and is passionate about building resilient, secure, and efficient platforms.

Key Responsibilities:
• Architect and implement enterprise-grade AWS cloud solutions for Qualcomm's software platforms.
• Design and implement CI/CD pipelines using Jenkins, GitHub Actions, and Terraform to enable rapid and reliable software delivery.
• Develop reusable Terraform modules and automation scripts to support scalable infrastructure provisioning.
• Drive observability initiatives using Prometheus, Grafana, Fluentd, OpenTelemetry, and Splunk to ensure system reliability and performance.
• Collaborate with software development teams to embed DevOps practices into the SDLC and ensure seamless deployment and operations.
• Provide mentorship and technical leadership to junior engineers and cross-functional teams.
• Manage hybrid environments, including on-prem infrastructure and Kubernetes workloads supporting both Linux and Windows.
• Lead incident response, root cause analysis, and continuous improvement of SLIs for mission-critical systems.
• Drive toil reduction and automation using scripting or programming languages such as PowerShell, Bash, Python, or Go.
• Independently drive and implement DevOps/cloud initiatives in collaboration with key stakeholders.
• Understand software development designs and compilation/deployment flows for .NET, Angular, and Java-based applications to align infrastructure and CI/CD strategies with application architecture.

Required Qualifications:
• 10+ years of experience in IT or software development, with at least 5 years in cloud architecture and DevOps roles.
• Strong foundational knowledge of infrastructure components such as networking, servers, operating systems, DNS, Active Directory, and LDAP.
• Deep expertise in AWS services including EKS, RDS, MSK, CloudFront, S3, and OpenSearch.
• Hands-on experience with Kubernetes, Docker, containerd, and microservices orchestration.
• Proficiency in Infrastructure as Code using Terraform, and configuration management tools like Ansible and Chef.
• Experience with observability tools and telemetry pipelines (Grafana, Prometheus, Fluentd, OpenTelemetry, Splunk).
• Experience with agent-based monitoring tools such as SCOM and Datadog.
• Solid scripting skills in Python, Bash, and PowerShell.
• Familiarity with enterprise-grade web servers (IIS, Apache, Nginx) and load balancing solutions.
• Excellent communication and leadership skills with experience mentoring and collaborating across teams.

Preferred Qualifications:
• Experience with API gateway solutions for API security and management.
• Knowledge of RDBMS, preferably MSSQL/PostgreSQL.
• Proficiency in SRE principles including SLIs, SLOs, SLAs, error budgets, chaos engineering, and toil reduction.
• Experience in core software development (e.g., Java, .NET).
• Exposure to Azure cloud and hybrid cloud strategies.
• Bachelor's degree in Computer Science or a related field.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 4+ years of software engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of software engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field and 2+ years of software engineering or related work experience.
• 2+ years of work experience with a programming language such as C, C++, Java, Python, etc.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 6 days ago

Apply

0 years

15 Lacs

Hyderābād

On-site

Responsibilities:
• Design, develop, and maintain robust and scalable backend systems using Java 17, Spring Boot, and microservices.
• Implement asynchronous data processing using Apache Kafka (a minimal sketch follows this listing).
• Work with MySQL for complex data queries and schema design.
• Collaborate on responsive UI development using ReactJS or Angular (optional but preferred).
• Leverage GitHub Copilot to enhance productivity, code quality, and development speed.
• Integrate APIs, conduct peer code reviews, and participate in unit, integration, and performance testing.
• Contribute to DevOps workflows including CI/CD, containerization, and cloud deployment.

Requirements:
• Java/Spring migration experience; Banking/Finance domain experience.
• Strong hands-on experience in Selenium and BDD; writing JUnit test cases.
• End-to-end deployment experience.
• Proficient in Java 17, Spring Boot, JAX-B, J2EE, and microservices architecture.
• Experience with Kafka, MySQL, and REST API development.
• Familiarity with GitHub Copilot to aid rapid coding and prototyping.
• Understanding of secure, scalable, and maintainable coding practices.
• Exposure to DevOps tools, Docker, Kubernetes, and CI/CD pipelines.
• Version control with Git; experience with GitHub workflows preferred.

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Supplemental Pay: Yearly bonus
Work Location: In person
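To illustrate the asynchronous Kafka processing named above, here is a minimal consumer sketch, shown in Python with the kafka-python client for brevity (the role itself is Java 17/Spring Boot, where a `@KafkaListener` would play the same part). The topic and group id are illustrative assumptions.

```python
# Minimal Kafka consumer sketch (pip install kafka-python;
# topic and group id are hypothetical).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="order-processors",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Messages arrive asynchronously from the producer's point of view.
    print(f"offset={message.offset} value={message.value}")
```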

Posted 6 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 1500000 - Rs 3000000 (i.e., INR 15-30 LPA)
Min Experience: 10 years
Location: Chennai (hybrid)
Job Type: Full-time
Work Model/Location: Offshore, Contract to Hire
Work Team Organization: Ecommerce Engineering

Requirements

What you'll be doing:
We are seeking a Senior Full Stack Developer/Architect with extensive expertise in eCommerce architecture frameworks, particularly Microservices and Micro-frontend Architecture. You will lead the design and implementation of scalable, high-performance solutions for our cloud-hosted eCommerce platforms. This role requires collaboration with product managers, business leaders, and cross-functional teams to modernize systems and deliver exceptional business capabilities. As part of the eCommerce team, you will work with both business and technology teams to design and develop in-house, data-driven solutions for complex decision-making problems using computer science, analytics, mathematical optimization, and machine learning. You will also work closely with product and program management to derive application requirements, set expectations, and communicate progress.

What you bring to the table:
• Execution Focus: Highly motivated and self-driven, with a proven track record of efficiently and effectively executing business objectives.
• Business Alignment: Ability to bridge technology with business strategy, ensuring technical solutions align with organizational goals while effectively communicating with stakeholders.
• Performance Optimization: Proven ability to enhance site performance by optimizing Core Web Vitals, ensuring rapid load times and superior user engagement.
• ADA Compliance: Commitment to ensuring applications meet ADA compliance standards, guaranteeing accessibility for all users.
• Full Stack Development: Proficient in developing applications using Java/Spring Boot for backend services and React with TypeScript for frontend interfaces.
• Microservice Architecture: Expertise in designing and implementing microservices for seamless integration across distributed systems.
• Micro Frontend Architecture: Experience in architecting modular front-end applications using Micro Frontend (MFE) solutions for enhanced scalability.
• Database Expertise: Hands-on experience with distributed databases such as Couchbase and relational databases like MySQL, along with a solid grasp of NoSQL data management.
• Messaging Systems: Familiarity with distributed messaging systems (e.g., Solace, Azure Event Hubs, or Apache Kafka) for reliable inter-service communication.
• Data Pipelines: Skilled in constructing efficient data pipelines for both stream and batch processing to support large-scale data analysis.
• Technology Evolution: Proactive approach to staying updated on industry trends, continuously evaluating new technologies to enhance our tech stack.

What's needed - Basic Qualifications:

Experience:
• 10+ years of experience architecting and developing scalable applications as a Full-Stack Engineer, particularly in the eCommerce sector.
• 7+ years of hands-on programming experience in modern languages such as Java, Spring Boot, and NodeJS.
• 5+ years of proficiency in building applications using React JS/React Native with TypeScript.
• Extensive experience (7+ years) designing microservices architectures within cloud-native environments.

Technical Skills:
• Mastery of technologies including React JS, Next JS, Node JS, Java, and Spring Boot.
• Experience with both NoSQL databases (Couchbase) and relational databases (MySQL).
• Familiarity with messaging systems like Solace or Apache Kafka for event-driven architectures.
• Deep understanding of implementing Headless Commerce solutions.
• Experience implementing ADA compliance standards within web applications.
• Proven track record in optimizing performance metrics such as Core Web Vitals for eCommerce applications, ensuring fast, responsive, and user-friendly experiences.
• Strong experience with log debugging and performance monitoring using tools like Splunk and New Relic, combined with expertise in analyzing browser metrics via Chrome DevTools, WebPageTest, and other diagnostics to troubleshoot and optimize frontend performance.
• Strong understanding of automated testing practices, including unit, integration, and end-to-end (E2E) testing across frontend and backend. Familiar with TDD and collecting/testing quality metrics to ensure robust and reliable software.
• Experience with CI/CD pipelines, cross-platform deployments, and managing multi-cloud, multi-environment system setups for scalable application delivery.

Posted 6 days ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site

Overview:
WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_

Responsibilities:
MTS SILICON DESIGN ENGINEER

THE ROLE:
The position involves working with a highly experienced physical design team on server SOCs, and is responsible for delivering the physical design of tiles that meets challenging goals for frequency, power, and other design requirements for AMD's next-generation processors, in a fast-paced environment on cutting-edge technology.

THE PERSON:
An engineer with a good attitude who seeks new challenges and has strong analytical and problem-solving skills. The candidate needs the ability and desire to learn quickly, and should be a good team player with excellent communication skills and experience collaborating with engineers located in different sites/time zones.

KEY RESPONSIBILITIES:
• Implementing the RTL-to-GDS2 flow.
• Handling floorplanning and physical implementation: power planning, synthesis, placement, CTS, timing closure, routing, extraction, physical verification (DRC & LVS), crosstalk analysis, and EM/IR.
• Handling different PNR tools - Synopsys ICC2, ICC, Design Compiler, PrimeTime, StarRC; Mentor Graphics Calibre; Apache Redhawk.

PREFERRED EXPERIENCE:
• 8+ years of professional experience in physical design, preferably with high-performance designs.
• Experience in automated synthesis and timing-driven place and route of RTL blocks for high-speed datapath and control logic applications.
• Experience in automated design flows for clock tree synthesis, clock and power gating techniques, scan stitching, design optimization for improved timing/power/area, and design cycle time reduction.
• Experience in floorplanning, establishing design methodology, IP integration, checks for logic equivalence, physical/timing/electrical quality, and final signoff for large IP delivery.
• Strong experience with tools for logic synthesis, place and route, timing analysis, and design checks for physical and electrical quality; familiarity with tools for schematics, layout, and circuit/logic simulation.
• Versatility with scripts to automate the design flow.
• Strong communication skills, ability to multi-task across projects, and ability to work with geographically distributed teams.
• Experience in FinFET and dual-patterning nodes such as 16/14/10/7/5nm.
• Excellent physical design and timing background; a good understanding of computer organization/architecture is preferred.
• Strong analytical/problem-solving skills and pronounced attention to detail.

ACADEMIC CREDENTIALS:
Bachelor's or Master's in Electronics/Electrical Engineering

LOCATION: Hyderabad / Bangalore

#LI-PK2

Qualifications:
Benefits offered are described at AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.

AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Platform Support
• Provide technical support and troubleshoot issues related to the Starburst Enterprise platform.
• Ensure platform performance, availability, and reliability using Helm charts for resource management.

Deployment and Configuration
• Manage deployment and configuration of the Starburst Enterprise platform on Kubernetes using Helm charts and YAML-based values files.
• Build and maintain Docker images as needed to support efficient, scalable deployments and integrations.
• Employ GitHub Actions for streamlined CI/CD processes.

User Onboarding and Support
• Assist in onboarding users by setting up connections, catalogs, and data-consumption client tools.
• Address user queries and incidents, ensuring timely resolution and issue triage.

Maintenance and Optimization
• Perform regular updates, patching, and maintenance tasks to ensure optimal platform performance.
• Conduct application housekeeping, review user query logs, and run access audits.

Scripting and Automation
• Develop automation scripts using Python and GitHub pipelines to enhance operational efficiency (see the sketch following this listing).
• Document workflows and ensure alignment with business objectives.

Broader Knowledge and Integration
• Maintain expertise in technologies like Immuta, Apache Ranger, Collibra, Snowflake, PostgreSQL, Redshift, Hive, Iceberg, dbt, AWS Lambda, AWS Glue, and Power BI.
• Provide insights and recommendations for platform improvements and integrations.

New Feature Development and Integration
• Collaborate with feature and product development teams to design and implement new features and integrations with other data-product value-chain systems and tools.
• Assist in defining specifications and requirements for feature enhancements and new integrations.

Automation and Innovation
• Identify opportunities for process automation and implement solutions to enhance operational efficiency.
• Innovate and contribute to the development of new automation tools and technologies.

Incident Management
• Support incident management processes, including triaging and resolving technical challenges efficiently.

Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Experience supporting and maintaining applications deployed on Kubernetes using Helm charts and Docker images.
• Understanding of RDS, GitHub Actions, and CI/CD pipelines.
• Proficiency in Python and YAML scripting for automation and configuration.
• Excellent problem-solving skills and the ability to support users effectively.
• Strong verbal and written communication skills.

Preferred Qualifications
• Experience working with Kubernetes (k8s).
• Knowledge of data and analytical products like Immuta, Apache Ranger, Collibra, Snowflake, PostgreSQL, Redshift, Hive, Iceberg, dbt, AWS Lambda, AWS Glue, and Power BI.
• Familiarity with cloud environments such as AWS.
• Knowledge of additional scripting languages or tools is a plus.

Beneficial Experience
• Exposure to Starburst or other data virtualization technologies like Dremio, Trino, Presto, and Athena.
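As an example of the kind of Python automation script this listing describes, here is a minimal sketch that queries a Starburst cluster (Starburst is built on Trino) with the open-source `trino` client. The host, user, catalog, and schema are hypothetical.

```python
# Minimal Starburst/Trino automation sketch (pip install trino;
# connection details are hypothetical).
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",   # hypothetical cluster endpoint
    port=443,
    user="platform-bot",
    http_scheme="https",
    catalog="hive",
    schema="default",
)

# A housekeeping-style query: list tables visible in the catalog.
cur = conn.cursor()
cur.execute("SELECT table_name FROM information_schema.tables LIMIT 10")
for (table_name,) in cur.fetchall():
    print(table_name)
```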

Posted 6 days ago

Apply

1.0 years

1 - 4 Lacs

Hyderābād

On-site

Job Title: Data Analyst – AdTech (1+ Years Experience)
Location: Hyderabad
Experience Level: 2-3 Years
Employment Type: Full-time
Shift Timings: 5 PM - 2 AM IST

About the Role:
We are looking for a highly motivated and detail-oriented Data Analyst with 1+ years of experience to join our AdTech analytics team. In this role, you will work with large-scale advertising and digital media datasets, build robust data pipelines, query and transform data using GCP tools, and deliver insights through visualization platforms such as Looker Studio, Looker, and Tableau.

Key Responsibilities:
• Analyze AdTech data (e.g., ads.txt, programmatic delivery, campaign performance, revenue metrics) to support business decisions.
• Design, develop, and maintain scalable data pipelines using GCP-native tools (e.g., Cloud Functions, Dataflow, Composer).
• Write and optimize complex SQL queries in BigQuery for data extraction and transformation (a minimal sketch follows this listing).
• Build and maintain dashboards and reports in Looker Studio to visualize KPIs and campaign performance.
• Collaborate with cross-functional teams including engineering, operations, product, and client teams to gather requirements and deliver analytics solutions.
• Monitor data integrity, identify anomalies, and work on data quality improvements.
• Provide actionable insights and recommendations based on data analysis and trends.

Required Qualifications:
• 1+ years of experience in a data analytics or business intelligence role.
• Hands-on experience with AdTech datasets and understanding of digital advertising concepts.
• Strong proficiency in SQL, particularly with Google BigQuery.
• Experience building and managing data pipelines using Google Cloud Platform (GCP) tools.
• Proficiency in Looker Studio.
• Strong problem-solving skills and attention to detail.
• Excellent communication skills, with the ability to explain technical topics to non-technical stakeholders.

Preferred Qualifications:
• Experience with additional visualization tools such as Tableau, Power BI, or Looker (BI).
• Exposure to data orchestration tools like Apache Airflow (via Cloud Composer).
• Familiarity with Python for scripting or automation.
• Understanding of cloud data architecture and AdTech integrations (e.g., DV360, Ad Manager, Google Ads).
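To ground the BigQuery responsibility above, here is a minimal sketch using the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical; it assumes Application Default Credentials are configured.

```python
# Minimal BigQuery sketch (pip install google-cloud-bigquery;
# table and column names are hypothetical).
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT campaign_id, SUM(revenue) AS total_revenue
    FROM `my-project.adtech.campaign_performance`
    GROUP BY campaign_id
    ORDER BY total_revenue DESC
    LIMIT 10
"""

# query() submits the job; result() blocks until rows are available.
for row in client.query(sql).result():
    print(row.campaign_id, row.total_revenue)
```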

Posted 6 days ago

Apply

7.0 - 9.0 years

4 - 8 Lacs

Thiruvananthapuram

On-site

7 - 9 Years | 1 Opening | Trivandrum

Role description

Role Proficiency: Create and organize the testing process based on project requirements, and manage test activities within the team.

Outcomes:
• Prepare test estimates and schedules; ensure test coverage.
• Produce test results, defect reports, test logs, and reports as evidence of testing.
• Publish RCA reports and preventive measures.
• Ensure quality of deliverables.
• Report project metrics and status.
• Ensure adherence to engineering practices, processes, and standards.
• Understand and contribute to test automation/performance testing.
• Work with the DevOps team when required to understand the testing framework and QA process for implementing continuous testing.
• Manage team utilization.

Measures of Outcomes:
• Test script creation and execution productivity
• Defect leakage metrics (% of defects leaked, % of UAT defects, and % of production defects)
• % of test case reuse
• Test execution coverage
• Defect acceptance ratio
• Test review efficiency
• On-time delivery
• Effort variance
• Test automation coverage

Outputs Expected:
• Supporting the organization: Ensure utilization and quality of deliverables prepared by the team; coordinate test environment and test data provisioning.
• Test design, development, and execution: Participate in reviews, walkthroughs, and demos, and obtain sign-off from stakeholders; prepare test summary reports for modules/features.
• Requirements management: Analyze and prioritize requirements, identify gaps, and create workflow diagrams based on requirements/user stories.
• Project management: Participate in test management; prepare, track, and report test progress against schedule.
• Domain relevance: Identify business processes, conduct risk analysis, and ensure test coverage.
• Estimation: Prepare estimates and schedules; identify dependencies.
• Knowledge management: Consume, contribute, and review (best practices, lessons learned, retrospectives).
• Test design and execution: Test plan preparation, test case/script creation, and test execution.
• Risk identification: Identify risks/issues and prepare mitigation and contingency plans.
• Test and defect management: Conduct root-cause and trend analysis of defects.
• Test planning: Identify test scenarios with an understanding of systems, interfaces, and the application; identify end-to-end business-critical scenarios with minimal support; create and review test scenarios and prepare the RTM; prepare estimates (time/effort) based on requirements/user stories; identify the scope of testing.
• Client management: Define KPIs for the engagement and ensure adherence to them.
• Stakeholder connect: Handle monthly/weekly governance calls and represent issues for the team.

Skill Examples:
• Ability to create, review, and manage a test plan
• Ability to prepare schedules based on estimates
• Ability to track and report progress and take corrective measures as needed
• Ability to identify test scenarios and prepare the RTM
• Ability to analyze requirements/user stories and prioritize testing
• Ability to carry out RCA
• Ability to capture and report metrics
• Ability to identify test data and test environment specifications

Knowledge Examples:
• Estimation techniques
• Testing standards
• Identifying the scope of testing
• RCA techniques
• Test design techniques
• Test methodologies
• Scope identification and planning
• Test automation tools and frameworks

Additional Comments:
• Design, develop, and execute automated performance test scripts using tools such as Apache JMeter (an analysis sketch follows this listing).
• Define test strategies and performance testing plans to validate the scalability, stability, and reliability of applications.
• Collaborate with developers, architects, and DevOps teams to ensure applications meet performance expectations.
• Analyze test results and metrics, including CPU usage, memory consumption, garbage collection, throughput, and response time.
• Diagnose and troubleshoot performance issues in pre-production and production environments.
• Utilize AppDynamics, ElasticSearch, OpenSearch, Grafana, and Kafka for real-time monitoring and performance visualization.
• Perform root cause analysis to detect memory leaks, connection issues, and other system bottlenecks.
• Document findings, create performance reports, and present results to stakeholders with clear recommendations.
• Maintain performance baselines and monitor deviations over time.
• Drive performance tuning efforts across application layers including database, services, and infrastructure.
• Participate in capacity planning and support system scaling efforts.

Required Skills & Experience:
• Proficiency in performance testing tools such as Apache JMeter or similar.
• Experience with performance monitoring tools like AppDynamics, Grafana, or OpenSearch.
• Deep understanding of microservices architectures and cloud environments (Azure).
• Strong experience with test planning, workload modeling, and test data management.
• Solid experience analyzing system performance metrics across app servers, databases, OS, and network layers.
• Demonstrated ability to communicate performance insights through clear, actionable reporting.
• Familiarity with CI/CD pipelines, integration with performance tests, and automated workflows.
• Good understanding of DB tuning, application server tuning, and best practices for scalable architecture.
• Ability to work independently and collaboratively in a fast-paced, agile environment.
• Understanding of Kafka-based event streaming architectures.
• Knowledge of scripting languages such as Python, Groovy, or Shell for automation.

Skills: Performance Engineering, Apache JMeter, ElasticSearch, Grafana

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact - touching billions of lives in the process.
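As a concrete example of the JMeter result analysis this listing describes, here is a minimal sketch that summarizes a JTL (CSV) results file with pandas. The file name is hypothetical, and it assumes JMeter's default CSV output columns (`label`, `elapsed`, `success`).

```python
# Minimal post-run analysis of a JMeter JTL (CSV) results file
# (pip install pandas; results.jtl is a hypothetical file name).
import pandas as pd

df = pd.read_csv("results.jtl")

# "elapsed" is the response time in milliseconds in JMeter's CSV output.
summary = df.groupby("label")["elapsed"].agg(
    samples="count",
    p50=lambda s: s.quantile(0.50),
    p95=lambda s: s.quantile(0.95),
    p99=lambda s: s.quantile(0.99),
)
print(summary)

# "success" holds true/false; normalize to strings to be parser-agnostic.
success_rate = df["success"].astype(str).str.lower().eq("true").mean()
print(f"Error rate: {1.0 - success_rate:.2%}")
```

Percentiles, rather than averages, are what usually feed the performance baselines this role maintains, since they expose tail latency that a mean hides.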

Posted 6 days ago

Apply

Exploring Apache Jobs in India

The Apache Software Foundation maintains a wide range of widely used open-source projects, from the Apache HTTP Server to big-data tools such as Kafka, Spark, and Hadoop. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise. Job seekers looking to pursue a career in Apache-related roles have a plethora of opportunities in various industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.

Average Salary Range

The salary range for Apache professionals in India varies based on experience and skill level:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

In the Apache job market in India, a typical career path may progress as follows:

  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

Related Skills

Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:

  • Linux
  • Networking
  • Database Management
  • Cloud Computing

Interview Questions

  • What is Apache HTTP Server and how does it differ from Apache Tomcat? (medium)
  • Explain the difference between Apache Hadoop and Apache Spark. (medium)
  • What is mod_rewrite in Apache and how is it used? (medium)
  • How do you troubleshoot common Apache server errors? (medium)
  • What is the purpose of .htaccess file in Apache? (basic)
  • Explain the role of Apache Kafka in real-time data processing. (medium)
  • How do you secure an Apache web server? (medium)
  • What is the significance of Apache Maven in software development? (basic)
  • Explain the concept of virtual hosts in Apache. (basic)
  • How do you optimize Apache web server performance? (medium)
  • Describe the functionality of Apache Solr. (medium)
  • What is the purpose of Apache Camel? (medium)
  • How do you monitor Apache server logs? (medium) - a minimal log-parsing sketch follows this list
  • Explain the role of Apache ZooKeeper in distributed applications. (advanced)
  • How do you configure SSL/TLS on an Apache web server? (medium)
  • Discuss the advantages of using Apache Cassandra for data management. (medium)
  • What is the Apache Lucene library used for? (basic)
  • How do you handle high traffic on an Apache server? (medium)
  • Explain the concept of .htpasswd in Apache. (basic)
  • What is the role of Apache Thrift in software development? (advanced)
  • How do you troubleshoot Apache server performance issues? (medium)
  • Discuss the importance of Apache Flume in data ingestion. (medium)
  • What is the significance of Apache Storm in real-time data processing? (medium)
  • How do you deploy applications on Apache Tomcat? (medium)
  • Explain the concept of .htaccess directives in Apache. (basic)
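
For the log-monitoring question above, a concrete starting point helps: the sketch below tallies HTTP status codes from an Apache access log. It assumes the default combined log format and a local `access.log`; in production you would more likely reach for tools like GoAccess, the ELK stack, or mod_status.

```python
# Minimal Apache access-log analysis sketch (assumes the combined
# log format and a local access.log file).
import re
from collections import Counter

# Matches entries such as:
# 127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326 "-" "Mozilla/..."
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

status_counts = Counter()
with open("access.log") as f:
    for line in f:
        match = LOG_PATTERN.match(line)
        if match:
            status_counts[match.group("status")] += 1

for status, count in status_counts.most_common():
    print(f"{status}: {count}")
```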

Conclusion

As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!
