3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a highly skilled and motivated Python/AWS Big Data Engineer to join our data engineering team. The ideal candidate should have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. Your responsibilities will include designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. We are proud to have a team of 27,000 people globally who care about your growth and seek to provide you with exciting projects and opportunities, and the chance to work with state-of-the-art technologies throughout your career with us. At Virtusa, we believe in the potential of great minds coming together. We emphasize collaboration and a team environment, providing a dynamic place for talented individuals to nurture new ideas and strive for excellence.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Are you passionate about developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment?

Compliance Engineering, a global team of over 300 engineers and scientists, is dedicated to working on the most complex, mission-critical problems. The team builds and operates a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. Leveraging the latest technology and vast amounts of structured and unstructured data, we use modern frameworks to build responsive and intuitive front-end and Big Data applications. As the firm invests significantly in uplifting and rebuilding the Compliance application portfolio, Compliance Engineering seeks to fill several Systems Engineer roles.

As a member of our team, you will partner globally with users, development teams, and engineering colleagues across multiple divisions to facilitate the onboarding of new business initiatives and to test and validate Compliance Surveillance coverage. You will have the opportunity to learn from experts, train and mentor team members, leverage various technologies including Java, Python, PySpark, and other Big Data technologies, innovate and incubate new ideas, and be involved in the full software development life cycle.

A successful candidate will possess a Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study, expertise in Java development, debugging, and problem-solving, as well as experience in delivery or project management. The ability to clearly express ideas and arguments in meetings and on paper is essential. Experience in relational databases, Hadoop and Big Data technologies, knowledge of the financial industry (particularly the Capital Markets domain), and compliance or risk functions is desired and can set you apart from other candidates.

Goldman Sachs, a leading global investment banking, securities, and investment management firm founded in 1869 and headquartered in New York, is committed to fostering diversity and inclusion in the workplace and beyond. The firm provides numerous opportunities for professional and personal growth, from training and development to firmwide networks, benefits, wellness, personal finance offerings, and mindfulness programs. Learn more about the culture, benefits, and people at GS.com/careers. Goldman Sachs is an equal employment/affirmative action employer dedicated to finding reasonable accommodations for candidates with special needs or disabilities during the recruiting process.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As an AWS Data Engineer with over 5 years of experience, you will work in Chennai (WFO) and are expected to attend a face-to-face interview on 26 July (Saturday) during IST hours. Domain experience in Life Sciences / Pharma is preferred (good to have). The mandatory skill set includes AWS, Python, Databricks, PySpark, and SQL.

Your primary responsibilities will involve designing, building, and maintaining scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats. You will also be responsible for integrating data from multiple sources, ensuring accurate transformation and storage in optimal formats such as Delta Lake, Redshift, and S3. Additionally, you will optimize data processing and storage systems for cost efficiency and high performance while managing compute resources and cluster configurations.

Automation of data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks will be a crucial part of your role. Implementing data quality checks, validation rules, and transformation logic to guarantee accuracy, consistency, and reliability of data will also be essential.

In terms of cloud platform management, you will manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations. Leading migrations from legacy data systems to modern cloud-based platforms and implementing cost optimization strategies will also be part of your responsibilities. Ensuring data security by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards will be vital. Lastly, collaborating with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks will be key to your success in this role.
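To make the Delta Lake pipeline work described above concrete, here is a minimal PySpark sketch that reads raw CSV files from S3, applies basic validation, and appends the result to a Delta table. The bucket paths and column names are hypothetical, and a Databricks-style runtime with Delta Lake support is assumed.

```python
# Minimal PySpark sketch of an S3 -> Delta Lake batch step (hypothetical paths/columns).
# Assumes a Databricks-style runtime where Delta Lake support is already available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_delta").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/sales/2024/*.csv")   # hypothetical source path
)

cleaned = (
    raw.dropDuplicates(["order_id"])                    # basic data-quality rule
       .filter(F.col("amount").cast("double").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("s3://example-curated-bucket/delta/sales")    # hypothetical Delta location
)
```

In practice an orchestrator such as Airflow or Databricks Workflows would own scheduling and retries around a step like this.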
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
You are an experienced Databricks on AWS and PySpark Engineer looking to join our team. Your role will involve designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS and PySpark. You will also be responsible for developing and optimizing data processing workflows, collaborating with data scientists and analysts, ensuring data quality, security, and compliance, troubleshooting data pipeline issues, and staying updated with industry trends in data engineering and big data.

Your responsibilities will include:
- Designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS and PySpark
- Developing and optimizing data processing workflows using PySpark and Databricks
- Collaborating with data scientists and analysts to design and implement data models and architectures
- Ensuring data quality, security, and compliance with industry standards and regulations
- Troubleshooting and resolving data pipeline issues and optimizing performance
- Staying up-to-date with industry trends and emerging technologies in data engineering and big data

Requirements:
- 3+ years of experience in data engineering, with a focus on Databricks on AWS and PySpark
- Strong expertise in PySpark and Databricks, including data processing, data modeling, and data warehousing
- Experience with AWS services such as S3, Glue, and IAM
- Strong understanding of data engineering principles, including data pipelines, data governance, and data security
- Experience with data processing workflows and data pipeline management

Soft Skills:
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Ability to work in a fast-paced, dynamic environment
- Ability to adapt to changing requirements and priorities

If you are a proactive and skilled professional with a passion for data engineering and a strong background in Databricks on AWS and PySpark, we encourage you to apply for this opportunity.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Scala, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct thorough testing and debugging of applications to ensure optimal performance and reliability.

Professional & Technical Skills:
- Must to have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good to have skills: Experience with PySpark, Scala.
- Strong understanding of data integration and ETL processes.
- Familiarity with cloud computing concepts and services.
- Experience in application lifecycle management and agile methodologies.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Notice: 30 days to immediate
Experience Required: 8+ years in data engineering and software development

Job Description: We are seeking a Lead Data Engineer with strong expertise in Python, PySpark, Airflow (batch jobs), HPCC, and ECL to drive complex data solutions across multi-functional teams. The ideal candidate will have hands-on experience with data modeling, test-driven development, and Agile/Waterfall methodologies. You’ll lead initiatives, collaborate across teams, and translate business needs into scalable data solutions using best practices in managed services or staff augmentation environments.
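For the Airflow batch-job portion of this role, a minimal DAG sketch is shown below, assuming Airflow 2.x; the DAG id, schedule, and task callables are placeholders.

```python
# Minimal Airflow 2.x DAG sketch for a daily batch pipeline (hypothetical names).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull data from a source system for the run date.
    print("extracting for", context["ds"])


def transform(**context):
    # Placeholder: hand off to a PySpark or pandas transformation.
    print("transforming for", context["ds"])


with DAG(
    dag_id="daily_batch_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task
```

In a real pipeline the callables would trigger Spark jobs or external services rather than printing.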
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
This role is within the enterprise data office and product solution team, focused on ensuring accurate, timely, and fit-for-purpose data for business, risk management, and regulatory reporting requirements. You will engage with Markets, Risk, Finance, Tech, the Client org, and Data Engineering teams to gather requirements, understand application processing, identify gaps, and design systematic solutions for business needs. An average day is highly collaborative, focused on reaching out to application teams and users to understand Markets products processing in the Regulatory Reporting data flow, and on documenting product data flows along with the transformations/mappings/enrichments/logic within enterprise systems.

Key Responsibilities:
- Understand derivatives data flows within Citi for Equities, FX, IRS, Fixed Income, Commodities, etc.
- Perform data analysis for derivatives products across systems for target-state adoption and resolution of data gaps/issues
- Work in close coordination with Technology, Business Managers, and other stakeholders to fulfill the delivery objectives
- Create a product vision aligned to business priorities and a corresponding road-map to delivery
- Partner with senior team members, leaders, and a widely distributed global user community to define and implement solutions
- Lead assessment of end-to-end data flows for all data elements used in regulatory reports
- Document current- and target-state data mapping and produce gap assessments
- Coordinate with the business to identify critical data elements, define standards and quality expectations, and prioritize remediation of data issues
- Identify the appropriate strategic source for critical data elements
- Design and implement data governance controls, including data quality rules and data reconciliation
- Design systematic solutions for the elimination of manual processes/adjustments and remediation of tactical solutions
- Prepare detailed requirement specifications containing calculations, data transformations, and aggregation logic
- Perform functional testing and data validations

Skills & Qualifications
- 10+ years of combined experience in the banking and financial services industry, information technology, and/or data controls and governance
- Preferably an Engineering graduate with a post-graduation in Finance
- Extensive experience in the capital markets business and processes
- Deep understanding of derivative products (i.e., Equities, FX, IRS, Commodities, etc.)
- Strong data analysis skills using Excel, SQL, Python, PySpark, etc.
- Experience with data management processes, tools, and applications, including process mapping and lineage toolsets
- Actively managed various aspects of data initiatives including analysis, planning, execution, and day-to-day production management
- Ability to identify and solve problems throughout the product development process
- Analytical thinking: ability to break down complex data structures and processes to identify issues and develop logical models that meet business needs
- Strong knowledge of structured/unstructured databases, data modeling, data management, rapid/iterative development methodologies, and data governance tools
- Strong understanding of data governance issues, policies, regulatory requirements, and industry information affecting the business environment
- Demonstrated stakeholder management skills
- Excellent communication skills: able to communicate with technical and non-technical stakeholders to gather requirements and develop clear documentation
- Excellent presentation skills, business and technical writing, and verbal communication skills to support decision-making and actions
- Excellent problem-solving and critical thinking skills to recognize and comprehend complex data flows and designs
- Self-motivated and able to dynamically determine priorities
- Data visualization skills: can help create visual representations of data models and provide input to the UX/UI team to make it easier to communicate complex model relationships with stakeholders

Job Family Group: Product Management and Development
Job Family: Product Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Duties for this role include, but are not limited to: supporting the design, build, test, and maintenance of data pipelines at big data scale; assisting with updating data from multiple data sources; working on batch processing of collected data and matching its format to the stored data, making sure that the data is ready to be processed and analyzed; assisting with keeping the ecosystem and the pipeline optimized and efficient; troubleshooting standard performance and data-related problems and providing L3 support; and implementing parsers, validators, transformers, and correlators to reformat, update, and enhance the data.

Data Engineers play a pivotal role within Dataworks, focused on creating and driving engineering innovation and facilitating the delivery of key business initiatives. Acting as a “universal translator” between IT, business, software engineers, and data scientists, data engineers collaborate across multi-disciplinary teams to deliver value. Data Engineers will work on those aspects of the Dataworks platform that govern the ingestion, transformation, and pipelining of data assets, both to end users within FedEx and into data products and services that may be externally facing. Day-to-day, they will be deeply involved in code reviews and large-scale deployments.

Essential Job Duties & Responsibilities
- Understanding in depth both the business and technical problems Dataworks aims to solve
- Building tools, platforms, and pipelines to enable teams to clearly and cleanly analyze data, build models, and drive decisions
- Scaling up from “laptop-scale” to “cluster-scale” problems, in terms of both infrastructure and problem structure and technique
- Collaborating across teams to drive the generation of data-driven operational insights that translate to high-value optimized solutions
- Delivering tangible value very rapidly, collaborating with diverse teams of varying backgrounds and disciplines
- Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases
- Interacting with senior technologists from the broader enterprise and outside of FedEx (partner ecosystems and customers) to create synergies and ensure smooth deployments to downstream operational systems

Skill/Knowledge Considered a Plus
- Technical background in computer science, software engineering, database systems, or distributed systems
- Fluency with distributed and cloud environments and a deep understanding of optimizing computational considerations with theoretical properties
- Experience in building robust cloud-based data engineering and curation solutions to create data products useful for numerous applications
- Detailed knowledge of the Microsoft Azure tooling for large-scale data engineering efforts and deployments is highly preferred; experience with any combination of the following Azure tools: Azure Databricks, Azure Data Factory, Azure SQL D, Azure Synapse Analytics
- Developing and operationalizing capabilities and solutions, including under near real-time, high-volume streaming conditions
- Hands-on development skills with the ability to work at the code level and help debug hard-to-resolve issues
- A compelling track record of designing and deploying large-scale technical solutions which deliver tangible, ongoing value
- Direct experience having built and deployed robust, complex production systems that implement modern data processing methods at scale
- Ability to context-switch, to provide support to dispersed teams which may need an “expert hacker” to unblock an especially challenging technical obstacle, and to work through problems as they are still being defined
- Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value
- An ‘engineering’ mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress or magnify impact
- Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews
- Ability to conduct data analysis, investigation, and lineage studies to document and enhance data quality and access
- Use of agile and devops practices for project and software management, including continuous integration and continuous delivery
- Demonstrated expertise working with some of the following common languages and tools (see the streaming sketch below):
- Spark (Scala and PySpark), Kafka, and other high-volume data tools
- SQL and NoSQL storage tools, such as MySQL, Postgres, MongoDB/CosmosDB
- Java, Python data tools
- Azure DevOps experience to track work, develop using git-integrated version control patterns, and build and utilize CI/CD pipelines
- Working knowledge and experience implementing data architecture patterns to support varying business needs
- Experience with different data types (json, xml, parquet, avro, unstructured) for both batch and streaming ingestions
- Use of Azure Kubernetes Services, Event Hubs, or other related technologies to implement streaming ingestions
- Experience developing and implementing alerting and monitoring frameworks
- Working knowledge of Infrastructure as Code (IaC) through Terraform to create and deploy resources
- Implementation experience across different data stores, messaging systems, and data processing engines
- Data integration through APIs and/or REST services
- Power Platform (Power BI, Power Apps, Power Automate) development experience a plus

Additional Job Description
Analytical Skills, Accuracy & Attention to Detail, Planning & Organizing Skills, Influencing & Persuasion Skills, Presentation Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.
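As a small illustration of the Spark/Kafka streaming skills referenced in the tools list above, the following PySpark Structured Streaming sketch consumes a Kafka topic and lands it as parquet files. The broker address, topic name, and output paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Minimal PySpark Structured Streaming sketch: Kafka topic -> parquet files.
# Broker, topic, and output/checkpoint paths are hypothetical placeholders.
# Requires the spark-sql-kafka connector package to be available to the Spark session.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "device-measurements")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string for downstream parsing.
decoded = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    decoded.writeStream
    .format("parquet")
    .option("path", "/data/curated/measurements")
    .option("checkpointLocation", "/data/checkpoints/measurements")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```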
Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 1 week ago
2.0 - 6.0 years
8 - 18 Lacs
Gurugram
Remote
Role Characteristics: The Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, defining KPIs and monitoring them to measure the impact/success of product improvements/changes, and streamlining processes. This is an exciting and challenging role that will enable you to work with large data sets, expose you to cutting-edge analytical techniques, and let you work with the latest AWS analytics infrastructure (Redshift, S3, Athena) while gaining experience in the usage of location data to drive businesses. Working in a dynamic start-up environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and developing a deep understanding of human behavior in the real world. They would also have excellent communication skills, be able to synthesize and present complex information, and be a fast learner.

You Will:
- Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics
- Understand the objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product, Engineering)
- Derive insights and put them together to build a story that solves a given problem
- Suggest ways for process improvements in terms of script optimization and automating repetitive tasks
- Create and automate reports and dashboards through Python to track certain metrics based on given requirements
- Automate reports and dashboards through Python

Technical Skills (Must have):
- B.Tech degree in Computer Science, Statistics, Mathematics, Economics or related fields
- 4-6 years of experience in working with data and conducting statistical and/or numerical analysis
- Ability to write SQL code
- Scripting/automation using Python
- Hands-on experience in a data visualisation tool like Looker/Tableau/Quicksight
- Basic to advanced level understanding of statistics

Other Skills (Must have):
- Be willing and able to quickly learn about new businesses, database technologies and analysis techniques
- Strong oral and written communication
- Understanding of patterns/trends and the ability to draw insights from them

Preferred Qualifications (Nice to have):
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
- Hands-on experience with AWS services like Lambda, Step Functions, Glue, EMR, plus exposure to PySpark

What we offer: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave (maternity and paternity)
- Flexible time off (earned leaves, sick leaves, birthday leave, bereavement leave & company holidays)
- In-office daily catered lunch
- Fully stocked snacks/beverages
- Health cover for any hospitalization; covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee assistance program
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (mobile subsidy program)
- Internet reimbursement
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as VPF and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- NPS employer match
- Meal card for tax benefit
- Special benefits on salary account

We are an equal opportunity employer and value diversity, inclusion and equity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our vision is to transform how the world uses information to enrich life for all . Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. About Profile – Smart Manufacturing And AI (Data Science Engineer) Micron Technology’s vision is to transform how the world uses information to enrich life and our commitment to people, innovation, tenacity, collaboration, and customer focus allows us to fulfill our mission to be a global leader in memory and storage solutions. This means conducting business with integrity, accountability, and professionalism while supporting our global community. Describe the function of the role and how it fits into your department? As a Data Science Engineer at Micron Technology Inc., you will be a key member of a multi-functional team responsible for developing and growing Micron’s methods and systems for applied data analysis, modeling and reporting. You will be collaborating with other data scientists, engineers, technicians and data mining teams to design and implement systems to transform and process data extracted from Micron’s business systems, applying advanced statistical and mathematical methods to analyze the data, creating diagnostic and predictive models, and creating dynamic presentation layers for use by high-level engineers and managers throughout the company. You will be creating new solutions, as well as, supporting, configuring, and improving existing solutions. Why would a candidate love to work for your group and team? We are a Smart Manufacturing and AI organization with a goal to spearhead Industry 4.0 transformation and enable accelerated intelligence and digital operations in the company. Our teams deal with projects to help solve complex real-time business problems that would significantly help improve yield, cycle time, quality and reduce cost of our products. This role also gives a great opportunity to closely work with data scientists, I4.0 analysts and engineers and with the latest big data and cloud-based platforms/skillsets. We highly welcome new ideas and are large proponent of Innovation. What are your expectations for the position? We are seeking Data Science Engineers who are highly passionate about data and associated analysis techniques, can quickly adapt to learning new skills and can design/implement state-of-art Data Science and ML pipelines on-prem and on cloud. You will interact with experienced Data Scientists, Data Engineers, Business Areas Engineers, and UX teams to identify questions and issues for Data Science, AI and Advanced analysis projects and improvement of existing tools. In this position, you will help develop software programs, algorithms and/or automated processes to transform and process data from multiple sources, to apply statistical and ML techniques to analyze data, to discover underlying patterns or improve prediction capabilities, and to deploy advanced visualizations on modern UI platforms. There will be significant opportunities to perform exploratory and new solution development activities Roles & responsibilities can include but are not limited to: Broad Knowledge And Experience In Strong desire to grow career as Data Scientist in highly automated industrial manufacturing doing analysis and machine learning on terabytes and petabytes of diverse datasets. 
- Ability to extract data from different databases via SQL and other query languages, applying data cleansing, outlier identification, and missing-data techniques
- Ability to apply the latest mathematical and statistical techniques to analyze data and uncover patterns
- Interest in building web applications as part of the job scope
- Knowledge of cloud-based analytics and machine learning modeling
- Knowledge of building APIs for application integration
- Knowledge in the areas of statistical modeling, feature extraction and analysis, feature engineering, and supervised/unsupervised/semi-supervised learning
- Data analysis and validation skills
- Strong software development skills

Above Average Skills in Programming:
- Fluency in Python
- Knowledge of statistics, machine learning, and other advanced analytical methods
- Knowledge of JavaScript, AngularJS 2.0, or Tableau will be an added advantage
- An OOP background is an added advantage
- Understanding of PySpark and/or libraries for distributed and parallel processing is an added advantage
- Knowledge of TensorFlow and/or other statistical software, including scripting capability for automating analyses
- Knowledge of time series data, images, semi-supervised learning, and data with frequently changing distributions is a plus
- Understanding of Manufacturing Execution Systems (MES) is a plus

Demonstrated Ability To:
- Work in a dynamic, fast-paced work environment
- Be self-motivated and able to work under minimal direction
- Adapt to new technologies and learn quickly
- Bring a passion for data and information with strong analytical, problem-solving, and organizational skills
- Work in multi-functional groups, with diverse interests and requirements, toward a common objective
- Communicate very well with distributed teams (written, verbal, and presentation)

Education:
Bachelor's or Master's Degree in Computer Science, Mathematics, Data Science, or Physics. CGPA requirement: 7.0 and above.

About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities, from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences.
Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.

Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website listed in the About Micron Technology, Inc. section.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
- Extensive implementation experience in the data analytics space or a senior developer role in one of the modern technology stacks
- Excellent programming skills and proficiency in at least one of the major programming/scripting languages used in Gen AI orchestration, such as Python, PySpark, or Java
- Ability to build API-based scalable solutions and to debug/troubleshoot software or design issues
- Hands-on exposure to integrating at least one of the popular LLMs (OpenAI GPT, PaLM 2, Dolly, Claude 2, Cohere, etc.) using API endpoints
- Thorough understanding of prompt engineering; implementation exposure to LLM agent frameworks like LangChain and vector databases such as Pinecone, Chroma, or FAISS
- Ability to quickly conduct experiments and analyze the features and capabilities of newer versions of the LLM models as they come onto the market
- Basic data engineering skills to load structured and unstructured data from source systems to target data stores
- Work closely with Gen AI leads and other team members to address requirements from the product backlog
- Build and maintain data pipelines and infrastructure to support AI solutions

Desirable:
- Hands-on exposure to using cloud (Azure, GCP, AWS) services for storage, serverless logic, search, transcription, and chat
- Extensive experience with data engineering and ETL tools is a big plus
- Master's or Bachelor's degree in Computer Science, Statistics, or Mathematics
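To make the requirement of integrating an LLM via API endpoints concrete, here is a minimal sketch using the OpenAI Python SDK (v1.x); the model name and prompts are placeholders, and other providers (PaLM, Claude, Cohere) follow a broadly similar request/response pattern.

```python
# Minimal sketch of calling a hosted LLM through an API endpoint (OpenAI SDK v1.x).
# The model name and prompts are placeholders; the API key is read from OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You answer questions about product documentation."},
        {"role": "user", "content": "Summarize the key steps to onboard a new data source."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```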
Posted 1 week ago
5.0 - 10.0 years
11 - 21 Lacs
Bengaluru
Hybrid
The requirement is for a Data Engineer on the Google stack, not Microsoft, AWS, or other tools/clouds.

Data Extraction and Analysis: Using Google SQL or Google BigQuery to query and manipulate data from databases, and a programming language like Python, Go, or Perl to extract and update data via scripts. MapReduce experience would be good to have.
Data Visualization: Creating reports and dashboards using tools like Google Analytics and Google Looker Studio to present findings in a clear and understandable way. Ability to create effective visualizations to communicate insights.
Problem Solving: Identifying data engineering requirements, developing data pipelines, and using data to propose solutions and recommendations.
Performance Optimization: Analyzing data, SQLs, and stored procedures to identify areas for improvement in query performance. Applying performance-improvement techniques like indexing and partitioning. Familiar with data pipeline performance improvements using parallel processing, caching, efficient data storage formats, and in-memory computing.
Scheduling: Familiar with a job scheduler tool like Dreampipe or Airflow and how to schedule jobs.
Release Engineering: Familiar with on-demand and scheduled release management processes.
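As a brief illustration of the BigQuery-plus-Python workflow described above, the sketch below runs a query with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: query BigQuery from Python (hypothetical project/dataset/table).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

# result() waits for the query job and returns an iterator of rows.
for row in client.query(sql).result():
    print(row.event_date, row.events)
```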
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Job Title: MS Fabric Solution Engineer (Lead and Architect roles)
Experience: 7-10 years
Location: Remote
Budget: 1.2 LPM for 7+ years (lead role) & 1.4 LPM for 8+ years (Architect)
Shift: IST

JD for MS Fabric Solution Engineer

Key Responsibilities:
- Lead the technical design, architecture, and hands-on implementation of Microsoft Fabric PoCs. This includes translating business needs into effective data solutions, often applying Medallion Architecture principles within the Lakehouse.
- Develop and optimize ELT/ETL pipelines for diverse data sources: static data (e.g., CIM XML, equipment models, Velocity Suite data) and streaming data (e.g., measurements from grid devices, Event Hub and IoT Hub).
- Seamlessly integrate Fabric with internal systems (e.g., CRM, ERP) using RESTful APIs, data mirroring, Azure Integration Services, and CDC (Change Data Capture) mechanisms.
- Hands-on configuration and management of core Fabric components: OneLake, Lakehouse, Notebooks (PySpark/KQL), and Real-Time Analytics databases.
- Facilitate data access via GraphQL interfaces, Power BI Embedded, and Direct Lake connections, ensuring optimal performance for self-service BI and adhering to RLS/OLS.
- Work closely with Microsoft experts, SMEs, and stakeholders.
- Document architecture and PoC results, and provide recommendations for production readiness and data governance (e.g., Purview integration).

Required Skills & Experience:
- 5–10 years of experience in Data Engineering / BI / Cloud Analytics, with at least 1–2 projects using Microsoft Fabric (or a strong Power BI + Synapse background transitioning to Fabric).
- Proficient in: OneLake, Data Factory, Lakehouse, Real-Time Intelligence, Dataflow Gen2; ingestion using CIM XML, CSV, APIs, SDKs; Power BI Embedded and GraphQL interfaces; Azure Notebooks / PySpark / Fabric SDK.
- Experience with data modeling (asset registry, nomenclature alignment, schema mapping).
- Familiarity with real-time streaming (Kafka/Kinesis/IoT Hub) and data governance concepts.
- Strong problem-solving and debugging skills.
- Prior experience with PoC/prototype-style projects with tight timelines.

Good to Have:
- Knowledge of grid operations / energy asset management systems.
- Experience working on Microsoft-Azure joint engagements.
- Understanding of AI/ML workflow integration via Azure AI Foundry or similar.
- Relevant certifications: DP-600/700 or DP-203.
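As a hedged sketch of the Medallion-style Lakehouse work listed above, the notebook-style PySpark cell below promotes a hypothetical bronze Delta table to a cleaned silver table; it assumes it runs inside a Fabric (or Databricks) notebook attached to a Lakehouse where the tables are registered.

```python
# Notebook-style PySpark sketch: bronze -> silver promotion in a Lakehouse (hypothetical tables).
from pyspark.sql import SparkSession, functions as F

# Fabric/Databricks notebooks already provide a session; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze_meter_readings")

silver = (
    bronze
    .dropDuplicates(["meter_id", "reading_ts"])
    .filter(F.col("reading_value").isNotNull())
    .withColumn("reading_value", F.col("reading_value").cast("double"))
    .withColumn("processed_at", F.current_timestamp())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver_meter_readings")
)
```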
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
Posted 1 week ago
4.0 - 9.0 years
20 - 35 Lacs
Gurugram
Work from Office
Job Description
- The candidate should have extensive production experience (2+ years) in GCP
- Strong background in data engineering, with 2-3 years of experience in Big Data technologies including Hadoop, NoSQL, Spark, Kafka, etc.
- Exposure to enterprise application development is a must

Roles & Responsibilities
- 4-10 years of IT experience range is preferred.
- Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS (at least 4 of these services).
- Good to have knowledge of Cloud Composer, Cloud SQL, Bigtable, Cloud Functions.
- Strong experience in Big Data technologies: Hadoop, Sqoop, Hive and Spark, including DevOps.
- Good hands-on expertise in either Python or Java programming.
- Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL, Cloud IAM.
- Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build, Anthos.
- Ability to drive the deployment of the customers' workloads into GCP and provide guidance, a cloud adoption model, service integrations, appropriate recommendations to overcome blockers, and technical road-maps for GCP cloud implementations.
- Experience with technical solutions based on industry standards using GCP IaaS, PaaS and SaaS capabilities.
- Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies.
- Act as a subject-matter expert or developer around GCP and become a trusted advisor to multiple teams.
- Technical ability to become certified in required GCP technical certifications.
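As a small example of working with one of the GCP managed services named above, here is a Pub/Sub publish sketch using the google-cloud-pubsub client; the project and topic names are hypothetical.

```python
# Minimal sketch: publish a message to a Pub/Sub topic (hypothetical project/topic).
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "ingest-events")

# publish() returns a future; result() blocks until the server acknowledges the message.
future = publisher.publish(
    topic_path,
    b'{"order_id": "A-1001", "amount": 250.0}',
    source="demo-producer",  # attributes are optional key/value strings
)
print("published message id:", future.result())
```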
Posted 1 week ago
5.0 - 10.0 years
20 - 30 Lacs
Hyderabad
Work from Office
About Position: Grow your career with an exciting opportunity with us, where you will be a part of creating software solutions that help to change lives - millions of lives. As a Data Engineer , you will have the opportunity to be a member of a focused team dedicated to helping to make the health care system work better for everyone. Here, you'll partner with some of the smartest people you've ever worked with to design solutions to meet a wide range of health consumer needs Role: Azure Data Engineer Location: Hyderabad Experience: 5 to 10 Years Job Type: Full Time Employment What You'll Do: Design and implement scalable ETL/ELT pipelines using Azure Data Factory. Develop and optimize big data solutions using Azure Databricks and PySpark. Write efficient and complex SQL queries for data extraction, transformation, and analysis. Collaborate with data architects, analysts, and business stakeholders to understand data requirements. Ensure data quality, integrity, and security across all data pipelines. Monitor and troubleshoot data workflows and performance issues. Implement best practices for data engineering, including CI/CD, version control, and documentation. Expertise You'll Bring: 3+ years of experience in data engineering with a strong focus on Azure cloud technologies. Proficiency in Azure Data Factory, Azure Databricks, PySpark, and SQL. Experience with data modeling, data warehousing, and performance tuning. Familiarity with version control systems like Git and CI/CD pipelines. Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry's best Let's unleash your full potential at Persistent "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 1 week ago
7.0 - 12.0 years
11 - 21 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Must have experience in ETL/ELT tools and pipelines. Working experience with the Python libraries Pandas, NumPy, and SQLAlchemy for ETL. Strong understanding of data warehousing and development. Experience with relational SQL and NoSQL databases.
Posted 1 week ago
7.0 - 12.0 years
25 - 35 Lacs
Kochi, Bengaluru, Thiruvananthapuram
Hybrid
Position: Data Engineer (Azure Databricks)
Experience: 7+ years
Locations: Trivandrum, Kochi, Bangalore
No. of Positions: 20
Notice Period: 0-15 days (strictly)
CTC: Up to 40 LPA (case-to-case basis)

Mandatory Skills: Azure Databricks, PySpark, SQL, Python

Key Responsibilities:
- Develop and optimize robust data pipelines using Databricks, PySpark, and Azure
- Work on complex ETL/ELT processes, transforming and modeling data for analytics and reporting
- Build scalable data solutions using relational and big data engines
- Apply a strong understanding of data warehousing concepts (e.g., Kimball/Star Schema)
- Collaborate with cross-functional teams in Agile environments
- Ensure clean code, versioning, documentation, and pipeline maintainability
- Must be able to work on a MacBook Pro (mandatory for script compatibility)

Requirements:
- 7+ years of hands-on experience in data engineering
- Expertise in the Azure cloud platform and Databricks notebooks
- Proficiency in SQL, Python, and PySpark
- Good communication and collaboration skills
- Solid documentation and version control practices

Preferred Candidates:
- Immediate joiners or those with 0-15 days notice
- Comfortable working from Trivandrum, Kochi, or Bangalore locations
- Previous experience in data-heavy environments with real-time or batch processing
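To illustrate the Kimball/star-schema modelling called out above, here is a short PySpark sketch that joins a fact table to a dimension and aggregates it for reporting; the table and column names are hypothetical and assumed to be Delta tables registered in the workspace metastore.

```python
# PySpark sketch: star-schema style fact/dimension join and aggregate (hypothetical tables).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

fact_sales = spark.read.table("fact_sales")        # grain: one row per order line
dim_product = spark.read.table("dim_product")      # conformed product dimension

revenue_by_category = (
    fact_sales.join(dim_product, "product_key", "left")
    .groupBy("category")
    .agg(
        F.sum("net_amount").alias("net_revenue"),
        F.countDistinct("order_id").alias("orders"),
    )
    .orderBy(F.desc("net_revenue"))
)

# Persist the aggregate as a reporting ("gold") table.
revenue_by_category.write.format("delta").mode("overwrite").saveAsTable("gold_revenue_by_category")
```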
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant-Data Engineer, AWS+Python, Spark, Kafka for ETL! Responsibilities Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. Integrate structured and unstructured data from various data sources into data lakes and data warehouses. Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step, Redshift) Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate the application data from legacy databases to Cloud based solutions (Redshift, DynamoDB, etc) for high availability with low cost Develop application programs using Big Data technologies like Apache Hadoop, Apache Spark, etc with appropriate cloud-based services like Amazon AWS, etc. Build data pipelines by building ETL processes (Extract-Transform-Load) Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Responsible for analysing business and functional requirements which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs Analyse requirements/User stories at the business meetings and strategize the impact of requirements on different platforms/applications, convert the business requirements into technical requirements Participating in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems Understand current application infrastructure and suggest Cloud based solutions which reduces operational cost, requires minimal maintenance but provides high availability with improved security Perform unit testing on the modified software to ensure that the new functionality is working as expected while existing functionalities continue to work in the same way Coordinate with release management, other supporting teams to deploy changes in production environment Qualifications we seek in you! Minimum Qualifications Experience in designing, implementing data pipelines, build data applications, data migration on AWS Strong experience of implementing data lake using AWS services like Glue, Lambda, Step, Redshift Experience of Databricks will be added advantage Strong experience in Python and SQL Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift. Advanced programming skills in Python for data processing and automation. Hands-on experience with Apache Spark for large-scale data processing. Experience with Apache Kafka for real-time data streaming and event processing. Proficiency in SQL for data querying and transformation. Strong understanding of security principles and best practices for cloud-based environments. 
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/ Skills Master’s Degree-Computer Science, Electronics, Electrical. AWS Data Engineering & Cloud certifications, Databricks certifications Experience with multiple data integration technologies and cloud platforms Knowledge of Change & Incident Management process Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Description Role Proficiency: This role requires proficiency in data pipeline development including coding and testing data pipelines for ingesting wrangling transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica Glue Databricks and DataProc with coding expertise in Python PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake BigQuery Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes Act creatively to develop pipelines and applications by selecting appropriate technical options optimizing application development maintenance and performance using design patterns and reusing proven solutions.rnInterpret requirements to create optimal architecture and design developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures Of Outcomes Adherence to engineering processes and standards Adherence to schedule / timelines Adhere to SLAs where applicable # of defects post delivery # of non-compliance issues Reduction of reoccurrence of known defects Quickly turnaround production bugs Completion of applicable technical/domain certifications Completion of all mandatory training requirements Efficiency improvements in data pipelines (e.g. reduced resource consumption faster run times). Average time to detect respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected Code Development: Develop data processing code independently ensuring it meets performance and scalability requirements. Define coding standards templates and checklists. Review code for team members and peers. Documentation Create and review templates checklists guidelines and standards for design processes and development. Create and review deliverable documents including design documents architecture documents infrastructure costing business requirements source-target mappings test cases and results. Configuration Define and govern the configuration management plan. Ensure compliance within the team. Testing Review and create unit test cases scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance Advise data engineers on the design and development of features and components demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management Manage the delivery of modules effectively. Defect Management Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality. 
Estimation Create and provide input for effort and size estimation for projects. Knowledge Management Consume and contribute to project-related documents SharePoint libraries and client universities. Review reusable documents created by the team. Release Management Execute and monitor the release process to ensure smooth transitions. Design Contribution Contribute to the creation of high-level design (HLD) low-level design (LLD) and system architecture for applications business components and data models. Customer Interface Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples Proficiency in SQL Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow Talend Informatica AWS Glue Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS Azure or Google Cloud particularly with data-related services (e.g. AWS Glue BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples Knowledge Examples Knowledge of various ETL services offered by cloud providers including Apache PySpark AWS Glue GCP DataProc/DataFlow Azure ADF and ADLF. Proficiency in SQL for analytics including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments Skills Cloud Platforms ( AWS, MS Azure, GC etc.) Containerization and Orchestration ( Docker, Kubernetes etc..) APIs - Change APIs to APIs development Data Pipeline construction using languages like Python, PySpark, and SQL Data Streaming (Kafka and Azure Event Hub etc..) Data Parsing ( Akka and MinIO etc..) Database Management ( SQL and NoSQL, including Clickhouse, PostgreSQL etc..) Agile Methodology ( Git, Jenkins, or Azure DevOps etc..) JS like Connectors/ framework for frontend/backend Collaboration and Communication Skills Aws Cloud,Azure Cloud,Docker,Kubernetes
Posted 1 week ago
5.0 - 9.0 years
0 - 3 Lacs
New Delhi, Pune, Delhi / NCR
Work from Office
Roles and Responsibilities
- Develop high-quality code in Python using PySpark, SQL, Flink/Spark Streaming, and other relevant technologies.
- Design, develop, test, deploy, and maintain large-scale data processing pipelines using Azure Databricks.
- Troubleshoot issues related to on-prem (Hadoop) / Databricks clusters and big data processing tasks.
- Develop complex SQL queries to extract insights from large datasets stored in relational databases such as PostgreSQL.

Desired Candidate Profile
- 6-8 years of experience in software development with expertise in the BI & Analytics domain.
- Bachelor's degree in any specialization (B.Tech/B.E.).
- Strong understanding of cloud computing concepts on the Microsoft Azure platform.
- Proficiency in programming languages such as Python, with hands-on experience working with PySpark.
Posted 1 week ago
5.0 - 10.0 years
17 - 20 Lacs
Chennai
Work from Office
Skills required: Senior ETL Developer

Mandatory skills (8+ years of experience in ETL development, with 4+ years on AWS PySpark scripting):
1. Experience deploying and running AWS-based data solutions using services or products such as S3, Lambda, SNS, and Step Functions; sound knowledge of AWS services is a must.
2. Strong in PySpark.
3. Hands-on, working knowledge of Python packages like NumPy, Pandas, etc.
4. Should be able to work as an individual contributor.
5. Good to have: familiarity with metadata management, data lineage, and principles of data governance.

Good to have:
1. Experience processing large sets of data transformations, for both semi-structured and structured data.
2. Experience building a data lake and configuring delta tables.
3. Good experience with compute and cost optimization.
4. Understanding of the environment and use case, and readiness to build holistic data integration frameworks.
5. Good experience in MWAA (Airflow orchestration).

Soft skills:
1. Good communication to interact with IT stakeholders and the business.
2. Ability to understand the pain points and deliver.
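As a hedged sketch of the Lambda/S3/SNS side of this stack, the handler below reads a newly arrived CSV from S3 with pandas and publishes a row-count summary to an SNS topic; the topic ARN is a placeholder, and pandas is assumed to be available via a Lambda layer or container image.

```python
# Minimal AWS Lambda handler sketch: read a CSV from S3, publish a row count to SNS.
# Bucket/key come from the triggering S3 event; the SNS topic ARN is a placeholder.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:etl-notifications"  # placeholder ARN


def handler(event, context):
    # Standard S3 event notification structure.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    message = f"Ingested s3://{bucket}/{key} with {len(df)} rows"
    sns.publish(TopicArn=TOPIC_ARN, Message=message)
    return {"rows": len(df)}
```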
Posted 1 week ago
6.0 - 10.0 years
9 - 17 Lacs
Pune
Work from Office
Strong experience with IBM DataStage for ETL development and data transformation. Proficiency in Azure Data Factory (ADF), Snowflake, and PySpark. Interested candidates, please fill out the Google form: https://forms.gle/A5ieWPGMFWrCZSGy5
Posted 1 week ago
8.0 - 11.0 years
15 - 25 Lacs
Pune
Work from Office
8+ years of experience in ETL/DW projects, with migration experience, team management, and delivery experience leading a team of 10+ resources. Proven expertise in Snowflake data warehousing, ETL, and data governance.
Posted 1 week ago
4.0 - 6.0 years
12 - 22 Lacs
Pune, Gurugram
Hybrid
Role: Data Engineer
Years of Experience: 3-6
Key Skills: PySpark, SQL, Azure, Python

Requirements:
- 2+ years of hands-on experience with PySpark development
- 2+ years of experience with SQL queries
- Strong SQL and data manipulation skills
- Azure Cloud experience is good to have
Posted 1 week ago