5.0 - 8.0 years
15 - 27 Lacs
Hyderabad
Work from Office
Dear Candidate, we are pleased to invite you to participate in the EY GDS face-to-face hiring event for the position of AWS Data Engineer.
Role: AWS Data Engineer
Experience Required: 5-8 years
Location: Hyderabad
Mode of interview: Face to face
JD - Technical Skills:
• Must have strong experience in AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark.
• Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL.
• Expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live.
• Experience with version control systems such as SVN and Git.
• Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion across various structured and unstructured data sources.
• Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog.
• Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance.
• Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity).
Kindly confirm your availability by applying to this job.
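As a rough illustration of the Glue crawler and job automation this posting describes, a minimal boto3 sketch might look like the following. The role ARN, bucket, catalog database, and job/script names are placeholder assumptions, not details from the role.

```python
import boto3

# Hypothetical names; replace with resources from your own AWS account.
GLUE_ROLE_ARN = "arn:aws:iam::123456789012:role/example-glue-role"
RAW_BUCKET = "s3://example-raw-zone/sales/"
CATALOG_DB = "example_catalog_db"

glue = boto3.client("glue", region_name="ap-south-1")

# Create a crawler that catalogs raw S3 data into the Glue Data Catalog.
glue.create_crawler(
    Name="sales-raw-crawler",
    Role=GLUE_ROLE_ARN,
    DatabaseName=CATALOG_DB,
    Targets={"S3Targets": [{"Path": RAW_BUCKET}]},
    SchemaChangePolicy={"UpdateBehavior": "UPDATE_IN_DATABASE",
                        "DeleteBehavior": "LOG"},
)
glue.start_crawler(Name="sales-raw-crawler")

# Register and launch a PySpark ETL job that reads the cataloged tables.
glue.create_job(
    Name="sales-etl-job",
    Role=GLUE_ROLE_ARN,
    Command={"Name": "glueetl",
             "ScriptLocation": "s3://example-scripts/sales_etl.py",
             "PythonVersion": "3"},
    GlueVersion="4.0",
    NumberOfWorkers=5,
    WorkerType="G.1X",
)
glue.start_job_run(JobName="sales-etl-job")
```

In practice the crawler and job would usually be created once via infrastructure-as-code and only triggered from orchestration; the sketch simply shows the API surface involved.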
Posted 1 week ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Pune
Work from Office
Python + PySpark (5 to 10 years); joining locations: Hyderabad and Pune.
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About LIDAS
The LIDAS (Lam India Data Analytics and Sciences) team is responsible for providing best-in-class analytics solutions that help improve business decisions in Global Operations and other business sub-functions across Lam. This organization, a Center of Excellence (CoE), consists of a high-performing team of experts who collaborate cross-functionally and provide analytics solutions tailored to the needs of various business functions. The team strives to improve the productivity and efficiency of business processes through business analytics, data science, and automation projects, and the resulting work accelerates stakeholder decision-making by providing data insights. The team continuously develops the technical skills and business acumen needed to solve complex business problems and use cases in the semiconductor, manufacturing, and supply chain domains.
Eligibility Criteria
Years of Experience: Minimum 9-12 years
Job Experience: Experience with Power Platform (Power Apps, Power Automate, and Power BI); expert in database and data warehouse technologies (Azure Synapse, SQL Server, SAP HANA); data analysis, data profiling, and data visualization.
Education: Bachelor's degree in Math, Statistics, Operations Research, or Computer Science; Master's degree in Business Analytics (with a background in Computer Science).
Primary Responsibilities
Interact closely with business stakeholders to understand their requirements and convert them into opportunities.
Lead POCs to create breakthrough technical solutions, performing exploratory and targeted data analyses.
Manage and support existing applications and implement best practices in a timely manner.
Design, develop, and test data models to import data from source systems to meet project requirements.
Analyze heterogeneous source data and write SQL scripts to integrate data from multiple data sources.
Analyze results to generate actionable insights and present findings to business users for informed decision-making.
Understand business requirements and develop dashboards to meet business needs.
Adapt to changing business requirements and support the development and implementation of best-known methods for data analytics.
Perform data mining that provides actionable data in response to changing business requirements.
Migrate data into standardized platforms (Power BI) and build critical data models to improve process performance and product quality.
Own the technical implementation and documentation associated with datasets.
Provide updates on project progress, perform root cause analysis on completed projects, and work on identified improvement areas (process, product quality, performance, etc.).
Provide post-implementation support and ensure that target project benefits are delivered in a robust and sustainable fashion.
Build relationships and partner effectively with cross-functional teams to ensure that available data is accurate, consistent, and timely.
Independently manage stakeholder expectations and optimally allocate time to larger initiatives with minimal guidance on prioritization or dependencies.
Provide mentorship and guidance to peers and junior team members.
Mandatory Skills Required To Perform The Job
Knowledge of the software development lifecycle; expert in translating business requirements into technical solutions; fanatical about quality, usability, security, and scalability.
Strong knowledge of Python and PySpark.
Specialist in Power Platform (Power Apps and Power Automate).
Expert in report and dashboard development (Power BI) and ETL tools (SAP DS, SSIS).
Data analysis skills, experience in extracting information from databases, Office 365.
Knowledge of SAP systems (SAP ECC T-codes and navigation).
Expert in database development, troubleshooting, and problem-solving (SQL Server, SAP HANA, Azure Synapse).
Experience in project requirements gathering and converting business requirements into analytical and technical specs.
Good understanding of business processes and experience in the Manufacturing/Inventory Management domains.
Knowledge of performing root cause analysis and corrective actions.
Excellent verbal and written communication and presentation skills; able to communicate cross-functionally.
Desirable Skills
Agile/Scrum development using any tools.
Understanding of enterprise server architecture and cloud platforms.
Experience in advanced analytics techniques using statistical analysis.
Ability to deliver training and presentations in the area of expertise.
Our Commitment
We believe it is important for every person to feel valued, included, and empowered to achieve their full potential. By bringing unique individuals and viewpoints together, we achieve extraordinary results. Lam Research ("Lam" or the "Company") is an equal opportunity employer. Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status or any other category protected by applicable federal, state, or local laws. It is the Company's intention to comply with all applicable laws and regulations. Company policy prohibits unlawful discrimination against applicants or employees.
Lam offers a variety of work location models based on the needs of each role. Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely, and fall into two categories: On-site Flex and Virtual Flex. In On-site Flex roles you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week. In Virtual Flex roles you'll work 1-2 days per week on-site at a Lam or customer/supplier location, and remotely the rest of the time.
Posted 1 week ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Role
Grade Level (for internal use): 10
Responsibilities
Work closely with various stakeholders to collect, clean, model, and visualise datasets.
Create data-driven insights by researching, designing, and implementing ML models to deliver insights and implement action-oriented solutions to complex business problems.
Drive ground-breaking ML technology within the Modelling and Data Science team.
Extract hidden-value insights and enrich the accuracy of the datasets.
Leverage technology and automate workflows, creating modernized operational processes aligned with the team strategy.
Understand, implement, manage, and maintain analytical solutions and techniques independently.
Collaborate and coordinate with the data, content, and modelling teams and provide analytical assistance across various commodity datasets.
Drive and maintain high-quality processes and deliver projects in collaborative Agile team environments.
Requirements
7+ years of programming experience, particularly in Python.
4+ years of experience working with SQL or NoSQL databases.
1+ years of experience working with PySpark.
University degree in Computer Science, Engineering, Mathematics, or related disciplines.
Strong understanding of big data technologies such as Hadoop, Spark, or Kafka.
Demonstrated ability to design and implement end-to-end scalable and performant data pipelines.
Experience with workflow management platforms like Airflow.
Strong analytical and problem-solving skills.
Ability to collaborate and communicate effectively with both technical and non-technical stakeholders.
Experience building solutions and working in an Agile environment.
Experience working with Git or other source control tools.
Strong understanding of Object-Oriented Programming (OOP) principles and design patterns.
Knowledge of clean code practices and the ability to write well-documented, modular, and reusable code.
Strong focus on performance optimization and writing efficient, scalable code.
Nice To Have
Experience working with oil, gas, and energy markets.
Experience working with BI visualization applications (e.g., Tableau, Power BI).
Understanding of cloud-based services, preferably AWS.
Experience working with unified analytics platforms like Databricks.
Experience with deep learning and related toolkits: TensorFlow, PyTorch, Keras, etc.
About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.
What's In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion.
Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Our Benefits Include
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 314321
Posted On: 2025-05-19
Location: Hyderabad, Telangana, India
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
As a Senior Data Scientist, you will drive data science initiatives from conception to deployment, crafting advanced ML models and providing mentorship to junior colleagues. You will collaborate across teams to integrate data-driven solutions, maintain data governance compliance, stay abreast of industry trends, contribute to thought leadership, and innovate solutions for intricate business problems.
Responsibilities
Lead and manage data science projects from conception to deployment, ensuring alignment with business objectives and deadlines.
Develop and deploy AI and statistical algorithms to extract insights and drive actionable recommendations from complex datasets.
Provide guidance and mentorship to junior data scientists on advanced analytics techniques, coding best practices, and model interpretation.
Design rigorous testing frameworks to evaluate model performance, validate results, and iterate on models to improve accuracy and reliability.
Stay updated with the latest advancements in data science methodologies, tools, and technologies, and contribute to the team's knowledge base by sharing insights, attending conferences, and conducting research.
Establish and maintain data governance policies, ensuring data integrity, security, and compliance with regulations.
Qualifications
5+ years of prior analytics and data science experience driving projects involving AI and advanced analytics.
3+ years of experience with deep learning frameworks, NLP/text analytics, SVMs, LSTMs, Transformers, and neural networks.
In-depth understanding of and hands-on experience with large language models, along with exposure to fine-tuning open-source models for a variety of use cases.
Strong exposure to prompt engineering, knowledge of vector databases, the LangChain framework, and data embeddings.
Strong problem-solving skills and the ability to iterate and experiment to optimize AI model behavior.
Proficiency in the Python programming language for data analysis, machine learning, and web development.
Hands-on experience with machine learning libraries such as NumPy, SciPy, and scikit-learn.
Experience with distributed computing frameworks such as PySpark and Dask.
Excellent problem-solving skills and attention to detail.
Ability to communicate effectively with diverse clients and stakeholders.
Education Background
Bachelor's in Computer Science, Statistics, Mathematics, or a related field. Tier I/II candidates preferred.
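To illustrate the kind of rigorous model-evaluation framework this role calls for, here is a minimal scikit-learn sketch; the synthetic dataset and the gradient-boosting classifier are illustrative assumptions only, not details from the posting.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)

# Cross-validation as one part of a rigorous evaluation framework.
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
print(f"5-fold F1: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Hold-out validation before any deployment decision.
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A production framework would add drift monitoring, bias checks, and automated re-validation on fresh data; the sketch only shows the core train/validate loop.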
Posted 1 week ago
8.0 - 11.0 years
11 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 8-11 years
Location: Bangalore/Hyderabad (hybrid model)
Interview Mode: Virtual
Interview Rounds: 2-3 rounds
Notice Period: Immediate to 15 days
Generic Responsibilities:
Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks.
Collaborate with cross-functional teams to gather requirements and design solutions for complex business problems.
Develop high-quality PySpark code to process large datasets stored in Azure Data Lake Storage.
Troubleshoot ADF pipeline failures and optimize performance for improved efficiency.
Generic Requirements:
8-11 years of experience as an Azure Data Engineer or in a similar role.
Strong expertise in Azure Data Factory (ADF), Azure Databricks, and PySpark.
Experience on big data processing projects involving ETL processes using Spark-based technologies.
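For context, a PySpark job of the kind this role describes, reading raw data from Azure Data Lake Storage, transforming it, and writing a curated output, might look roughly like the sketch below. The storage account, containers, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-orders-etl").getOrCreate()

# Hypothetical ADLS Gen2 paths; substitute your storage account and containers.
source_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders_daily/"

orders = spark.read.format("parquet").load(source_path)

daily = (
    orders
    .filter(F.col("status") == "COMPLETED")           # keep only fulfilled orders
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"),
         F.count(F.lit(1)).alias("order_count"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(target_path)
```

In an ADF-driven setup this script would typically run as a Databricks notebook or job activity triggered by the pipeline, with ADF handling scheduling and retries.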
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description
Job Title: Data Engineer
Experience: 5-8 years
Job Description
We are seeking a highly skilled Data Engineer with expertise in building scalable data pipelines and working with Azure Cloud services. The ideal candidate will be proficient in designing and implementing end-to-end data pipelines, ensuring high performance and scalability. You will leverage Azure services like Azure Data Factory, Azure SQL Database, and Azure Databricks, as well as PySpark for data transformations. This is an excellent opportunity to make a significant impact in an innovative environment.
Key Responsibilities
Data Pipeline Development: Design, implement, and optimize end-to-end data pipelines on Azure, focusing on scalability, performance, and reliability.
ETL Workflow Development: Develop and maintain ETL workflows to ensure seamless and efficient data processing.
Azure Cloud Expertise: Utilize Azure services such as Azure Data Factory, Azure SQL Database, and Azure Databricks for effective data engineering solutions.
Data Storage Solutions: Implement and manage data storage solutions on Azure, ensuring optimal performance and scalability.
Data Transformation: Use PySpark to perform advanced data transformations, ensuring high-quality and well-structured data outputs.
Data Cleansing and Enrichment: Implement data cleansing, enrichment, and validation processes using PySpark.
Performance Optimization: Optimize data pipelines, queries, and PySpark jobs to enhance overall system performance and scalability.
Bottleneck Identification: Identify and resolve performance bottlenecks within data processing workflows, ensuring efficiency.
Must-Have Skills
Azure Expertise: Proven experience with Azure Cloud services, especially Azure Data Factory, Azure SQL Database, and Azure Databricks.
PySpark Proficiency: Expertise in PySpark for data processing and analytics.
Data Engineering: Strong background in building and optimizing data pipelines and workflows.
ETL Processes: Solid experience with data modeling, ETL processes, and data warehousing.
Performance Tuning: Ability to optimize data pipelines and jobs to ensure scalability and performance.
Problem-Solving: Experience troubleshooting and resolving performance bottlenecks in data workflows.
Collaboration: Strong communication skills and the ability to work collaboratively in cross-functional teams.
Good-To-Have Skills
Azure Certifications: Any relevant Azure certifications will be considered a plus.
Advanced Data Transformation: Experience with advanced data transformations using PySpark and other tools.
Data Governance: Knowledge of data governance best practices in the context of data engineering and cloud solutions.
Big Data Technologies: Familiarity with other big data technologies and tools such as Hadoop, Spark, and Kafka.
Data Warehousing: Experience in designing and implementing data warehousing solutions in the Azure environment.
Qualification
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
5-8 years of professional experience as a Data Engineer with a focus on Azure technologies.
Skills: Azure, Azure Data Factory, PySpark
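As an illustration of the data cleansing and validation responsibilities above, the following PySpark sketch applies a few simple cleansing rules and a basic data-quality gate. The paths, column names, and 5% rejection threshold are assumptions made for the example, and the Delta output assumes a Databricks-style environment where Delta Lake is available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-cleansing").getOrCreate()

# Hypothetical input produced by an upstream ADF copy activity.
raw = spark.read.option("header", True).csv(
    "abfss://landing@examplestorage.dfs.core.windows.net/customers/"
)

cleansed = (
    raw
    .dropDuplicates(["customer_id"])                      # remove duplicate records
    .withColumn("email", F.lower(F.trim("email")))        # normalise email values
    .withColumn("signup_date", F.to_date("signup_date"))  # enforce a date type
    .filter(F.col("customer_id").isNotNull())             # basic validation rule
)

# Simple data-quality gate: fail fast if too many rows were rejected.
total = raw.count()
rejected = total - cleansed.count()
if rejected > 0.05 * total:
    raise ValueError(f"Rejected {rejected} of {total} rows, above the 5% threshold")

cleansed.write.mode("overwrite").format("delta").save(
    "abfss://curated@examplestorage.dfs.core.windows.net/customers/"
)
```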
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Information Technology Group, Information Technology Group > Systems Analysis
General Summary:
Proven experience in testing, particularly in data engineering.
Strong coding skills in languages such as Python/Java.
Proficiency in SQL and NoSQL databases.
Hands-on experience in data engineering, ETL processes, and data warehousing QA activities.
Design and develop automated test frameworks for data pipelines and ETL processes.
Use tools and technologies such as Selenium, Jenkins, and Python to automate test execution.
Experience with cloud platforms such as AWS, Azure, or Google Cloud.
Familiarity with data technologies like Databricks, Hadoop, PySpark, and Kafka.
Understanding of CI/CD pipelines and DevOps practices.
Knowledge of containerization technologies like Docker and Kubernetes.
Experience with performance testing and monitoring tools.
Familiarity with version control systems like Git.
Exposure to Agile and DevOps methodologies.
Experience in test case creation, functional and regression testing, defect creation, and root cause analysis.
Good verbal and written communication, analytical, and problem-solving skills.
Ability to work with team members around the globe (US, Taiwan, India, etc.) to provide required support.
Overall 10+ years of experience.
Principal Duties And Responsibilities:
Manages project priorities, deadlines, and deliverables with minimal supervision. Determines which work tasks are most important for self and junior personnel, avoids distractions, and independently deals with setbacks in a timely manner. Understands relevant business and IT strategies, contributes to cross-functional discussion, and maintains relationships with IT and customer peers. Seeks out learning opportunities to increase own knowledge and skill within and outside of domain of expertise. Serves as a technical lead on a subsystem or small feature, assigns work to a small project team, and works on advanced tasks to complete a project. Communicates with the project lead via email and direct conversation to make recommendations about overcoming impending obstacles. Adapts to significant changes and setbacks in order to manage pressure and meet deadlines independently. Collaborates with more senior Systems Analysts and/or business partners to document and present recommendations for improvements to existing applications and systems. Acts as a technical resource for less knowledgeable personnel. Manages projects of small to medium size and complexity, performs tasks, and applies expertise in the subject area to meet deadlines. Anticipates complex issues and discusses them within and outside of the project team to maintain open communication. Identifies test scenarios and/or cases, oversees test execution, provides QA results to the business across a few projects, assists with defining test strategies and testing methods, and conducts business risk assessment. Performs troubleshooting, assists on complex issues related to bugs in production systems or applications, and collaborates with business subject matter experts on issues. Assists and/or mentors other team members for training and performance management purposes, disseminates subject matter knowledge, and trains the business on how to use tools.
Level Of Responsibility: Working under some supervision.
Taking responsibility for own work and making decisions that are moderate in impact; errors may have relatively minor financial impact or effect on projects, operations, or customer relationships; errors may require involvement beyond the immediate work group to correct. Using verbal and written communication skills to convey complex and/or detailed information to multiple individuals/audiences with differing knowledge levels. The role may require strong negotiation and influence, and communication to large groups or high-level constituents. Having a moderate amount of influence over key organizational decisions (e.g., is consulted by senior leadership to provide input on key decisions). Using deductive and inductive problem solving; multiple approaches may be taken/necessary to solve the problem; often information is missing or incomplete; intermediate data analysis/interpretation skills may be required. Exercising creativity to draft original documents, imagery, or work products within established guidelines.
Minimum Qualifications:
4+ years of IT-relevant work experience with a Bachelor's degree, OR 6+ years of IT-relevant work experience without a Bachelor's degree.
Minimum 6-8 years of proven experience in testing, particularly in data engineering.
Preferred Qualifications:
Proven experience in testing, particularly in data engineering.
10+ years of QA/testing experience.
Strong coding skills in languages such as Python/Java.
Proficiency in SQL and NoSQL databases.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
3075333
Posted 1 week ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Key Responsibilities:
Cloud-Based Development: Design, develop, and deploy scalable solutions using AWS services such as S3, Kinesis, Lambda, Redshift, DynamoDB, Glue, and SageMaker.
Data Processing & Pipelines: Implement efficient data pipelines and optimize data processing using pandas, Spark, and PySpark.
Machine Learning Operations (MLOps): Work with model training, model registry, model deployment, and monitoring using AWS SageMaker and related services.
Infrastructure-as-Code (IaC): Develop and manage AWS infrastructure using AWS CDK and CloudFormation to enable automated deployments.
CI/CD Automation: Set up and maintain CI/CD pipelines using GitHub, AWS CodePipeline, and CodeBuild for streamlined development workflows.
Logging & Monitoring: Implement robust monitoring and logging solutions using Splunk, DataDog, and AWS CloudWatch to ensure system performance and reliability.
Code Optimization & Best Practices: Write high-quality, scalable, and maintainable Python code while adhering to software engineering best practices.
Collaboration & Mentorship: Work closely with cross-functional teams, providing technical guidance and mentorship to junior developers.
Qualifications & Requirements
7+ years of experience in software development with a strong focus on Python.
Expertise in AWS services, including S3, Kinesis, Lambda, Redshift, DynamoDB, Glue, and SageMaker.
Proficiency in Infrastructure-as-Code (IaC) tools like AWS CDK and CloudFormation.
Experience with data processing frameworks such as pandas, Spark, and PySpark.
Understanding of machine learning concepts, including model training, deployment, and monitoring.
Hands-on experience with CI/CD tools such as GitHub, CodePipeline, and CodeBuild.
Proficiency in monitoring and logging tools like Splunk and DataDog.
Strong problem-solving skills, analytical thinking, and the ability to work in a fast-paced, collaborative environment.
Preferred Skills & Certifications
AWS Certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, AWS Certified Machine Learning).
Experience with containerization (Docker, Kubernetes) and serverless architectures.
Familiarity with big data technologies such as Apache Kafka, Hadoop, or AWS EMR.
Strong understanding of distributed computing and scalable architectures.
Skills: Python, MLOps, AWS
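To give a flavour of the Infrastructure-as-Code work mentioned above, here is a minimal AWS CDK (Python) sketch that provisions an S3 bucket and a Lambda function. The stack, bucket, and handler names are hypothetical, and the sketch assumes CDK v2 conventions rather than any specific project layout used by the hiring team.

```python
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

class IngestStack(Stack):
    """Minimal stack: a raw-data bucket and a Lambda that processes new objects."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        raw_bucket = s3.Bucket(self, "RawBucket", versioned=True)

        processor = _lambda.Function(
            self, "Processor",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),   # local ./lambda directory
            timeout=Duration.minutes(5),
        )
        raw_bucket.grant_read(processor)               # least-privilege IAM grant

app = App()
IngestStack(app, "ExampleIngestStack")
app.synth()
```

Running `cdk deploy` against such a stack is what makes deployments repeatable and reviewable, which is the point of the IaC requirement in the posting.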
Posted 1 week ago
4.0 - 8.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Hiring for Azure Data Engineer. The client is looking for immediate joiners who can join within 30 days.
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NICE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.
So, what’s the role all about?
NICE provides state-of-the-art, enterprise-level AI and analytics for all forms of business communications across speech and digital channels. We are a world-class research team developing new algorithms and approaches to help companies solve critical issues such as identifying their best-performing agents, preventing fraud, categorizing customer issues, and determining overall customer satisfaction. If you have interacted with a major contact center in the last decade, it is very likely we have processed your call. The research group partners with all areas of NICE’s business to scale out the delivery of new technology and AI models to customers around the world, tailored to each company, industry, and language.
How will you make an impact?
Conduct cutting-edge research and develop advanced NLP algorithms and models.
Build and fine-tune deep learning and machine learning models, with a focus on large language models.
Work closely with internal stakeholders to define model requirements and ensure alignment with business objectives.
Develop AI predictive models and perform data and model accuracy analyses.
Produce and present findings, technical concepts, and model recommendations to both technical and non-technical stakeholders.
Develop and maintain scripts/tools to automate both new model production and updates to existing model packages.
Stay abreast of the latest advancements in data science research and contribute to the development of our knowledge base.
Collaborate with developers to design automation and tool improvements for model building.
Maintain documentation of processes and projects across all supported languages and environments.
Have you got what it takes?
Master's degree in Computer Science, Technology, Engineering, Math, or equivalent practical experience.
Minimum of 8 years of data science work experience, including implementing machine learning and NLP models using real-life data.
Experience with Retrieval-Augmented Generation (RAG) pipelines or LLMOps.
Advanced knowledge of statistics and machine learning algorithms.
Proficiency in Python programming and familiarity with R.
Experience with deep learning models and libraries such as PyTorch, TensorFlow, and JAX.
Familiarity with relational databases and query languages (e.g., MSSQL) and basic SQL knowledge.
Hands-on experience with transformer models (BERT, FlanT5, Llama, etc.) and GenAI frameworks (HuggingFace, LangChain, Ollama, etc.).
Experience deploying NLP models in production environments, ensuring scalability and performance using AWS/GCP/Azure.
Strong verbal and written communication skills, including effective presentation abilities.
Ability to work independently and as part of a team, demonstrating analytical thinking and problem-solving skills.
You will have an advantage if you also have:
Expertise with big data technologies (e.g., PySpark).
Background in knowledge graphs, graph databases, or GraphRAG architectures.
Understanding of multimodal models (text, audio, vision).
Experience in Customer Experience domains.
Experience with package development and technical writing.
Familiarity with tools like Jira, Confluence, and source control packages and methodology.
Knowledge and interest in foreign languages and linguistics.
Experience working on international, globe-spanning teams and with AWS.
Past participation in a formal research setting.
Experience as part of a software organization.
What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!
Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 6969
Reporting into: Tech Manager
Role Type: Individual Contributor
About NICE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NICE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NICE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NICE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
Posted 1 week ago
8.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
What you will do
We are seeking a seasoned Principal Data Engineer to lead the design, development, and implementation of our data strategy. The ideal candidate possesses a deep understanding of data engineering principles, coupled with strong leadership and problem-solving skills. As a Principal Data Engineer, you will architect and oversee the development of robust data platforms while mentoring and guiding a team of data engineers.
Roles & Responsibilities:
Possess strong rapid-prototyping skills and quickly translate concepts into working code.
Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies.
Design, develop, and implement robust data architectures and platforms to support business objectives.
Oversee the development and optimization of data pipelines and data integration solutions.
Establish and maintain data governance policies and standards to ensure data quality, security, and compliance.
Architect and manage cloud-based data solutions, using AWS or other preferred platforms.
Lead and motivate an impactful data engineering team to deliver exceptional results.
Identify, analyze, and resolve complex data-related challenges.
Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions.
Stay abreast of emerging data technologies and explore opportunities for innovation.
Basic Qualifications:
Master's degree and 8 to 10 years of experience (computer science and engineering preferred; other engineering fields considered), OR Bachelor's degree and 10 to 14 years of experience (computer science and engineering preferred; other engineering fields considered), OR Diploma and 14 to 18 years of experience (computer science and engineering preferred; other engineering fields considered).
Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions.
Strong understanding of cloud architecture principles and cost-optimization strategies.
Proficient in Python, PySpark, and SQL.
Hands-on experience with big data ETL performance tuning.
Proven ability to lead and develop impactful data engineering teams.
Strong problem-solving, analytical, and critical thinking skills to address complex data challenges.
Preferred Qualifications:
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with Apache Spark and Apache Airflow.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Experience with AWS, GCP, or Azure cloud services.
Professional Certifications
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
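As a small example of the pipeline orchestration referenced in the preferred qualifications, an Apache Airflow DAG with three placeholder tasks could be sketched as follows; the DAG id, schedule, and task bodies are assumptions for illustration only, and the `schedule` argument assumes Airflow 2.4 or later (older releases use `schedule_interval`).

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")        # placeholder task logic

def transform():
    print("run PySpark / SQL transformations")    # placeholder task logic

def load():
    print("publish curated tables for analytics")  # placeholder task logic

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load   # explicit task ordering
```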
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Company: Material Plus
Role: Technical Architect (AWS Glue)
Location: Gurgaon (Hybrid)
About Us:
We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.
Why work for Material
In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is a right fit for you. Here’s a bit about who we are and highlights around what we offer.
Who We Are & What We Care About:
Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology and tracking. Our engagement management team makes it all hum for clients.
We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding combined with a science & systems approach uniquely equips us to bring a rich frame of reference to our work. A community focused on learning and making an impact: Material is an outcomes-focused company. We create experiences that matter, create new value and make a difference in people's lives.
What We Offer:
Professional development and mentorship.
Hybrid work mode with a remote-friendly workplace.
Great Place To Work Certified, 6 times in a row.
Health and family insurance.
40+ leaves per year along with maternity and paternity leaves.
Wellness, meditation and counselling sessions.
Job Description
Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python.
Implement and manage ETL/ELT processes to ensure seamless data integration and transformation.
Ensure information security and compliance with data governance standards.
Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems.
Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team.
Primary Skills:
Enhancements, new development, defect resolution, and production support of ETL development using AWS native services.
Integration of data sets using AWS services such as Glue and Lambda functions.
Utilization of AWS SNS to send emails and alerts.
Authoring ETL processes using Python and PySpark.
ETL process monitoring using CloudWatch events.
Connecting to different data sources like S3 and validating data using Athena.
Experience in CI/CD using GitHub Actions.
Proficiency in Agile methodology.
Extensive working experience with advanced SQL and a complex understanding of SQL.
Secondary Skills:
Experience working with Snowflake and understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies.
Competencies / Experience:
Deep technical skills in AWS Glue (Crawler, Data Catalog): 10+ years.
Hands-on experience with Python and PySpark: 5+ years.
PL/SQL experience: 5+ years.
CloudFormation and Terraform: 5+ years.
CI/CD with GitHub Actions: 5+ years.
Experience with BI systems (Power BI, Tableau): 5+ years.
Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda: 5+ years.
Additionally, familiarity with any of the following is highly desirable: Jira, Gi
Note: We are looking for candidates with a notice period of 30 days or less.
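To illustrate the Athena-based validation and SNS alerting listed above, a minimal boto3 sketch might look like this; the query, results bucket, and topic ARN are placeholders rather than details from the posting.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")
sns = boto3.client("sns", region_name="ap-south-1")

# Hypothetical resources; replace with your own database, table, and topic.
QUERY = "SELECT COUNT(*) AS row_count FROM example_db.daily_sales WHERE load_date = current_date"
RESULTS_S3 = "s3://example-athena-results/"
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:etl-alerts"

run = athena.start_query_execution(
    QueryString=QUERY,
    ResultConfiguration={"OutputLocation": RESULTS_S3},
)
query_id = run["QueryExecutionId"]

# Poll until the validation query finishes.
while True:
    state = athena.get_query_execution(
        QueryExecutionId=query_id
    )["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state != "SUCCEEDED":
    # Alert the team that the validation step did not complete.
    sns.publish(TopicArn=TOPIC_ARN,
                Subject="ETL validation failed",
                Message=f"Athena validation query {query_id} ended in state {state}")
else:
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    row_count = rows[1]["Data"][0]["VarCharValue"]   # row 0 is the header row
    print(f"daily_sales rows loaded today: {row_count}")
```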
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
What you will do
In this vital role you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. The role involves working closely with product managers, designers, data engineers, and other engineers to create high-quality, scalable software solutions, as well as automating operations, monitoring system health, and responding to incidents to minimize downtime. You will play a key role in a regulatory submission content automation initiative that will modernize and digitize the regulatory submission process, positioning Amgen as a leader in regulatory innovation. The initiative leverages state-of-the-art technologies, including Generative AI, Structured Content Management, and integrated data, to automate the creation and management of regulatory content.
Roles & Responsibilities:
Take ownership of complex software projects from conception to deployment.
Manage software delivery scope, risk, and timeline.
Possess strong rapid-prototyping skills and quickly translate concepts into working code.
Provide technical guidance and mentorship to junior developers.
Contribute to both front-end and back-end development using cloud technology.
Develop innovative solutions using generative AI technologies.
Conduct code reviews to ensure code quality and adherence to best practices.
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
Identify and resolve technical challenges effectively.
Stay updated with the latest trends and advancements.
Work closely with the product team, business team, and other stakeholders.
Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements.
Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications.
Develop and implement unit tests, integration tests, and other testing strategies to ensure the quality of the software.
Identify and resolve software bugs and performance issues.
Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time.
Maintain detailed documentation of software designs, code, and development processes.
Customize modules to meet specific business requirements.
Integrate with other systems and platforms to ensure seamless data flow and functionality.
Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek has the following qualifications.
Basic Qualifications:
Master's degree with 4-6 years of experience in Computer Science, IT, or a related field, OR Bachelor's degree with 6-8 years of experience in Computer Science, IT, or a related field, OR Diploma with 10-12 years of experience in Computer Science, IT, or a related field.
Preferred Qualifications:
Must-Have Skills:
Proficiency in Python/PySpark development, FastAPI, PostgreSQL, Databricks, DevOps tools, CI/CD, GitLab, and data ingestion. Candidates should be able to write clean, efficient, and maintainable code.
Knowledge of HTML, CSS, and JavaScript, along with popular front-end frameworks like React or Angular, to build interactive and responsive web applications.
In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles.
Experience with AWS services for scalable storage solutions and cloud computing.
Strong understanding of software development methodologies, including Agile and Scrum.
Experience with version control systems like Git.
Hands-on experience with various cloud services, and understanding of the pros and cons of each against well-architected cloud design principles.
Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.
Experience with API integration, serverless, and microservices architecture.
Experience with SQL/NoSQL databases and vector databases for large language models.
Good-to-Have Skills:
Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk).
Experience with data processing tools like Hadoop, Spark, or similar.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
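As a minimal illustration of the FastAPI-style service work listed in the must-have skills, the sketch below exposes two endpoints over an in-memory store that stands in for PostgreSQL; the routes, model, and field names are hypothetical and not taken from the posting.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example submission-content API")

# Illustrative in-memory store standing in for a PostgreSQL table.
DOCUMENTS: dict[int, dict] = {}

class Document(BaseModel):
    title: str
    body: str

@app.post("/documents/{doc_id}")
def create_document(doc_id: int, doc: Document) -> dict:
    if doc_id in DOCUMENTS:
        raise HTTPException(status_code=409, detail="document already exists")
    DOCUMENTS[doc_id] = {"title": doc.title, "body": doc.body}
    return {"id": doc_id, **DOCUMENTS[doc_id]}

@app.get("/documents/{doc_id}")
def read_document(doc_id: int) -> dict:
    if doc_id not in DOCUMENTS:
        raise HTTPException(status_code=404, detail="document not found")
    return {"id": doc_id, **DOCUMENTS[doc_id]}
```

Run locally with `uvicorn app:app --reload` (assuming the file is named app.py); a real service would swap the dictionary for a PostgreSQL-backed repository.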
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Associate level
Min Experience: 3 years
Location: Mumbai
Job Type: Full-time
About The Role
We are looking for an insightful and tech-savvy Data Visualization Analyst with 3-6 years of experience in transforming complex datasets into clear, actionable narratives. If you're passionate about crafting impactful dashboards, enjoy working with cutting-edge cloud data tools, and thrive in fast-paced environments, this role is for you. You'll work across functions, partnering with business, engineering, and analytics teams, to design intuitive, scalable, and aesthetically rich dashboards and reports that drive better decisions across the company.
What You’ll Do
📊 Visualization Design & Development
Create and manage interactive dashboards and data visualizations using tools like Power BI, Tableau, or Looker.
Develop custom visuals and reports that are visually appealing, responsive, and tailored to stakeholder needs.
🛠️ Cloud-Based Data Access & Transformation
Extract, process, and model large-scale data from Azure Data Lake and Databricks, ensuring performance and accuracy.
Collaborate with data engineers to prepare, clean, and transform datasets for reporting and visualization.
🤝 Stakeholder Collaboration
Translate business questions into clear analytical visual narratives and performance dashboards.
Act as a visualization consultant to product, marketing, and operations teams, understanding their metrics and guiding visual design choices.
🔍 Data Quality & Governance
Perform data profiling, validation, and cleansing to ensure data integrity.
Maintain documentation and consistency across reports, visuals, and metric definitions.
🚀 Continuous Improvement & Innovation
Stay ahead of trends in dashboarding, self-serve analytics, and BI UX best practices.
Optimize existing dashboards to enhance performance, usability, and storytelling quality.
What You Bring
✔️ Core Skills & Experience
3-6 years of professional experience in data visualization, business intelligence, or analytics.
Strong hands-on knowledge of Azure Data Lake, Databricks, and cloud-native data platforms.
Advanced proficiency in one or more visualization tools: Power BI, Tableau, Looker, or similar.
Solid SQL experience for writing complex queries and transforming datasets.
Understanding of data modeling concepts, including star/snowflake schemas and OLAP cubes.
🧠 Nice-to-Have Skills
Familiarity with Azure Synapse, Data Factory, or Azure SQL Database.
Experience using Python or PySpark for data prep or analytics automation.
Exposure to data governance, role-based access control, or data lineage tools.
Soft Skills & Traits
Strong visual design sense and attention to detail.
Ability to explain complex technical topics in simple, business-friendly language.
Proactive mindset, keen to take ownership of dashboards from concept to delivery.
Comfortable working in agile teams and managing multiple projects simultaneously.
Preferred Qualifications
Bachelor's degree in Computer Science, Statistics, Data Analytics, or a related field.
Certifications in Azure, Databricks, Power BI, or Tableau are a plus.
Posted 1 week ago
4.0 - 6.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Role: Data Engineer
Experience: 4-6 years
Locations: Chennai, Bangalore, Pune, Hyderabad, Kochi, Bhubaneshwar
Required Skillset
=> Should have experience in PySpark
=> Should have experience in AWS Glue
Interested candidates can send their resume to jegadheeswari.m@spstaffing.in or reach me @ 9566720836.
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Location: Noida, India
Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure.
Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai, Pune among others. Over 1800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India's growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace and Digital Identity and Security markets.
Data Engineer – Business Insights and Analytics
About the Role:
We're seeking a skilled Data Engineer to help build and evolve our Business Insights and Analytics platform, designed to drive insights for our Software Monetization clients. In this impactful role, you'll work alongside a talented team to design, develop, and scale a modern, data-driven analytics platform.
Responsibilities:
Transform Requirements: Translate both functional and technical requirements into detailed design specifications.
Develop Scalable Solutions: Design, develop, test, deploy, and maintain a robust, scalable data platform for advanced analytics.
Drive Innovation: Shape our BI and analytics roadmap, contributing to the conception, implementation, and ongoing evolution of projects.
Collaborate Across Teams: Work closely with Product Managers, Data Architects, DevOps Engineers, and other Data Engineers in an agile environment.
Problem-Solve Creatively: Tackle complex challenges with curiosity, open-mindedness, and a commitment to continuous improvement.
Your Profile:
Experience: Multiple years as a Data Engineer with extensive experience in big data, data integration, cloud data warehousing, and data lakes.
Technical Skills: Strong experience building data processing pipelines using Databricks. Advanced proficiency in Python, PySpark, and SQL. Practical experience with CI/CD pipelines, Terraform, and cloud build tools (e.g., Google Cloud Build). Knowledge of the GCP ecosystem (Pub/Sub, BigQuery, Cloud Functions, Firestore) is a plus. Familiarity with AI frameworks in Python is advantageous. Experience with data analytics tools (e.g., Power BI, MS Dynamics, Salesforce CRM Analytics) is beneficial. Skills in GoLang and Java are a bonus.
Attributes: A collaborative team player who is open, creative, analytical, self-motivated, and solution-oriented. Comfortable in an international, intercultural environment.
At Thales we provide CAREERS and not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here, apply now!
Posted 1 week ago
9.0 - 14.0 years
20 - 30 Lacs
Bengaluru
Hybrid
My profile: linkedin.com/in/yashsharma1608
Hiring manager profile: on payroll of https://www.nyxtech.in/
Client: Brillio (payroll)

AWS Architect
Primary skills: AWS (Redshift, Glue, Lambda, ETL and Aurora), advanced SQL and Python, PySpark. Note: Aurora Database is a mandatory skill.
Experience: 9+ yrs
Notice period: Immediate joiner
Location: Any Brillio location (preferred is Bangalore)
Budget: 30 LPA

Job Description: 9+ years of IT experience with deep expertise in S3, Redshift, Aurora, Glue and Lambda services. At least one instance of proven experience in developing a data platform end to end using AWS. Hands-on programming experience with Data Frames, Python, and unit testing the Python as well as Glue code. Experience in orchestration mechanisms like Airflow, Step Functions, etc. Experience working on AWS Redshift is mandatory. Must have experience writing stored procedures, understanding of the Redshift Data API and writing federated queries. Experience in Redshift performance tuning. Good communication and problem-solving skills. Very good stakeholder communication and management.
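Since the JD calls out stored procedures and the Redshift Data API, here is a small hedged sketch of that pattern using boto3's redshift-data client; the cluster, database, user, and procedure names are placeholders rather than details from the client environment.

```python
import time
import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

# Kick off a stored procedure on a provisioned cluster
# (cluster, database, user, and procedure are hypothetical)
resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="CALL staging.load_daily_orders('2024-01-31')",
)

# The Data API is asynchronous, so poll until the statement finishes
statement_id = resp["Id"]
while True:
    desc = client.describe_statement(Id=statement_id)
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

print(desc["Status"], desc.get("Error", ""))
```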
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 8 years
Location: Mumbai
Job Type: Full-time

About The Role
We’re looking for an accomplished Assistant Vice President – Data Engineering to lead our enterprise data engineering function as part of the broader Data & Analytics leadership team. This role is suited for a hands-on leader with a strategic mindset—someone who can architect modern data ecosystems, lead high-performing teams, and drive innovation in a cloud-first, analytics-heavy environment. You’ll be responsible for designing and scaling data infrastructure that powers advanced analytics, machine learning, and decision intelligence across the organization. This is a high-impact leadership position at the intersection of technology, data, and business strategy.

Key Responsibilities
🔹 Team Leadership & Vision
Lead and mentor a team of data engineers, instilling engineering best practices and a culture of quality, collaboration, and innovation. Shape the long-term vision for data engineering, helping define data architecture, tooling strategy, and the scalability roadmap.
🔹 Modern Data Infrastructure
Design and implement scalable, high-performance data pipelines across batch and real-time workloads using tools like Databricks, PySpark, and Delta Lake. Build foundational architecture for data lakehouse and/or data mesh implementations on modern cloud platforms.
🔹 ETL/ELT & Data Pipeline Management
Drive the development of robust ETL/ELT workflows that ingest, transform, cleanse, and enrich data from diverse sources. Implement orchestration and monitoring using tools such as Airflow, Azure Data Factory, or Prefect.
🔹 Data Modeling & SQL Optimization
Architect logical and physical data models that support advanced analytics and BI use cases. Write and review complex SQL queries to ensure performance efficiency, scalability, and data consistency.
🔹 Data Quality & Governance
Partner with governance and compliance teams to implement data quality frameworks, lineage tracking, and role-based access controls. Ensure all data engineering processes are aligned with data privacy regulations and security standards.
🔹 Cross-Functional Collaboration
Serve as a strategic partner to business stakeholders, data scientists, analysts, and product teams to translate data needs into scalable solutions. Communicate data strategy and engineering progress clearly to both technical and non-technical audiences.
🔹 Innovation & Continuous Improvement
Stay abreast of emerging technologies in cloud data platforms, streaming, and AI-powered data ops. Lead proof-of-concept initiatives and drive continuous improvement in engineering workflows and infrastructure.

Required Experience & Skills
8+ years of hands-on experience in data engineering, including 2+ years in a leadership or senior management role. Deep expertise in Databricks, PySpark, and big data processing frameworks. Advanced proficiency in SQL and experience with designing performant data models. Strong experience building data pipelines on cloud platforms such as Azure, AWS, or Google Cloud Platform. Familiarity with data lakehouse concepts, distributed computing, and modern architecture patterns. Knowledge of version control (e.g., Git), CI/CD workflows, and observability tools. Excellent communication and leadership skills with an ability to influence across functions and levels.
Preferred Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related discipline. Industry certifications in Databricks, Azure Data Engineer Associate, or equivalent platforms. Experience with data mesh, streaming architectures (Kafka, Spark Streaming), or lakehouse implementations. Exposure to DataOps practices and data product development frameworks.
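As a rough illustration of the Databricks/Delta Lake work this role describes (not the client's actual pipeline), the snippet below performs a typical Delta Lake upsert: merging an incremental extract into a silver-layer table keyed on a hypothetical customer_id. Paths and column names are placeholders.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

# Incremental extract landed by an upstream ingestion job (placeholder path)
updates = spark.read.parquet("s3://example-landing/customers_incremental/")

# Existing silver-layer Delta table (placeholder path)
target = DeltaTable.forPath(spark, "s3://example-lake/silver/customers/")

# Upsert: update matching keys, insert new ones
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```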
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking to hire AWS Data Pipeline Professionals in the following areas:

Experience: 3-5 Years

Job Description
Design, develop, and implement cloud solutions on AWS, utilizing a wide range of AWS services, including Glue ETL, Glue Data Catalog, Athena, Redshift, RDS, DynamoDB, Step Functions, EventBridge, Lambda, API Gateway, ECS, and ECR.
Demonstrate expertise in implementing AWS core services, such as EC2, RDS, VPC, ELB, EBS, Route 53, S3, DynamoDB, and CloudWatch.
Leverage strong Python and PySpark data engineering capabilities to analyze business requirements, translate them into technical solutions, and ensure successful execution.
Expertise in the AWS Data and Analytics stack, including Glue ETL, Glue Data Catalog, Athena, Redshift, RDS, DynamoDB, Step Functions, EventBridge, Lambda, API Gateway, and ECS/ECR for containerization.
In-depth knowledge of AWS core services, such as EC2, RDS, VPC, ELB, EBS, Route 53, S3, DynamoDB, and CloudWatch.
Develop HLDs, LLDs, test plans, and execution plans for cloud solution implementations, including Work Breakdown Structures (WBS).
Interact with customers to understand cloud service requirements, transform requirements into workable solutions, and build and test those solutions.
Manage multiple cloud solution projects, demonstrating technical ownership and accountability.
Capture and share best-practice knowledge within the AWS solutions architect community.
Serve as a technical liaison between customers, service engineering teams, and support.
Possess a strong understanding of cloud and infrastructure components, including servers, storage, networks, data, and applications, to deliver end-to-end cloud infrastructure architectures and designs.
Effectively collaborate with team members from around the globe.
Excellent analytical and problem-solving skills.
Strong communication and presentation skills.
Ability to work independently and as part of a team.
Experience working with onshore-offshore teams.

Required Behavioral Competencies
Accountability: Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team.
Collaboration: Participates in team activities and reaches out to others in the team to achieve common goals.
Agility: Demonstrates a willingness to accept and embrace differing ideas or perceptions which are beneficial to the organization.
Customer Focus: Displays awareness of customers' stated needs and gives priority to meeting and exceeding customer expectations at or above expected quality within the stipulated time.
Communication: Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision.
Drives Results: Sets realistic stretch goals for self and others to achieve and exceed defined goals/targets.

Certifications: Good to have.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment.
We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
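To make the Step Functions / EventBridge / Lambda orchestration mentioned in the job description above concrete, here is a minimal, assumption-laden sketch of a Lambda handler that an EventBridge rule could invoke to start a Step Functions state machine (which might in turn run a Glue job). The state machine ARN is read from a placeholder environment variable; none of the names come from the posting.

```python
import json
import os
import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    """Start a Step Functions execution, passing the triggering event as input."""
    response = sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # placeholder, set in Lambda config
        input=json.dumps({"source_event": event}),
    )
    return {"executionArn": response["executionArn"]}
```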
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Let’s be unstoppable together!

At Circana, we are fueled by our passion for continuous learning and growth; we seek and share feedback freely, and we celebrate victories both big and small in an environment that is flexible and accommodating to our work and personal lives. We have a global commitment to diversity, equity, and inclusion, as we believe in the undeniable strength that diversity brings to our business, employees, clients, and communities. With us, you can always bring your full self to work. Join our inclusive, committed team to be a challenger, own outcomes, and stay curious together. Circana is proud to be Certified™ by Great Place To Work®. This prestigious award is based entirely on what current employees say about their experience working at Circana. Learn more at www.circana.com

Role & Responsibilities
Evaluate domain, financial and technical feasibility of solution ideas with the help of all key stakeholders. Design, develop, and maintain highly scalable data processing applications. Write efficient, reusable and well-documented code. Deliver big data projects using Spark, Scala, Python and SQL. Maintain and tune existing Spark applications; find opportunities for optimizing existing Spark applications. Work closely with QA, Operations and various teams to deliver error-free software on time. Actively lead / participate in daily agile/scrum meetings. Take responsibility for Apache Spark development and implementation. Translate complex technical and functional requirements into detailed designs. Investigate alternatives for data storing and processing to ensure implementation of the most streamlined solutions. Serve as a mentor for junior staff members by conducting technical training sessions and reviewing project outputs.

Qualifications
Engineering graduates with computer science backgrounds preferred, with 8+ years of software development experience with Hadoop framework components (HDFS, Spark, Scala, PySpark). Excellent verbal, written and presentation skills. Ability to present and defend a solution with technical facts and business proficiency. Understanding of data-warehousing and data-modeling techniques. Strong data engineering skills. Knowledge of Core Java, Linux, SQL, and any scripting language. At least 6+ years of experience using Python / Scala, Spark and SQL. Knowledge of shell scripting is a plus. Knowledge of Core and Advanced Java is a plus. Experience in developing and tuning Spark applications. Excellent understanding of Spark architecture, data frames and Spark tuning. Strong knowledge of database concepts, systems architecture, and data structures is a must. Process-oriented with strong analytical and problem-solving skills. Experience in writing Python standalone applications dealing with the PySpark API. Knowledge of the DELTA.IO package is a plus.

Note: "An offer of employment may be conditional upon successful completion of a background check in accordance with local legislation and our candidate privacy notice. Your current employer will not be contacted without your permission."
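As one small, generic example of the Spark tuning this role calls for (illustrative only, with hypothetical paths and column names), the snippet below avoids a shuffle on the small side of a join by broadcasting a dimension table and sizes the shuffle partitions explicitly instead of leaving them at the default.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("orders-enrichment")
    .config("spark.sql.shuffle.partitions", "400")  # sized to the cluster, not left at the default
    .getOrCreate()
)

orders = spark.read.parquet("hdfs:///data/raw/orders/")        # large fact table (placeholder path)
stores = spark.read.parquet("hdfs:///data/reference/stores/")  # small dimension table (placeholder path)

# Broadcasting the small side removes the shuffle for that table entirely
enriched = orders.join(broadcast(stores), "store_id", "left")

enriched.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/orders_enriched/"
)
```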
Posted 1 week ago
8.0 - 12.0 years
15 - 27 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Role & responsibilities:

Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS. Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements:
• Candidate must be experienced working in projects involving the areas below; other ideal qualifications include experience in:
• Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python
• Familiarity with AWS compute, storage and IAM concepts
• Experience in working with S3 Data Lake as the storage tier
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
• Cloud warehouse experience (Snowflake, etc.) is a huge plus
• Carefully evaluates alternative risks and solutions before taking action
• Optimizes the use of all available resources
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit

Skills:
• Hands-on experience with Databricks, Spark SQL and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience in shell scripting
• Exceptionally strong analytical and problem-solving skills
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
• Strong experience with relational databases and data access methods, especially SQL
• Excellent collaboration and cross-functional leadership skills
• Excellent communication skills, both written and verbal
• Ability to manage multiple initiatives and priorities in a fast-paced collaborative environment
• Ability to leverage data assets to respond to complex questions that require timely answers
• Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: Need only immediate joiners / candidates serving notice period. Interested candidates can apply.

Regards, HR Manager
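A generic sketch of the Databricks Spark SQL on S3 pattern this description centres on, with placeholder bucket, table, and column names throughout: register a Parquet dataset from the S3 data lake as a temporary view and aggregate it with Spark SQL.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-lake-query").getOrCreate()

# Expose a curated S3 dataset to Spark SQL (placeholder bucket and schema)
spark.read.parquet("s3://example-datalake/curated/transactions/") \
    .createOrReplaceTempView("transactions")

# Monthly settled revenue by channel (illustrative columns)
monthly_revenue = spark.sql("""
    SELECT date_trunc('month', txn_ts) AS month,
           channel,
           SUM(amount)                 AS revenue
    FROM   transactions
    WHERE  txn_status = 'SETTLED'
    GROUP  BY 1, 2
    ORDER  BY 1, 2
""")

monthly_revenue.write.mode("overwrite").parquet(
    "s3://example-datalake/marts/monthly_revenue/"
)
```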
Posted 1 week ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Traya Health: Traya is an Indian direct-to-consumer hair care brand platform providing a holistic treatment for consumers dealing with hair loss. The Company provides personalised consultations that help determine the root cause of hair fall among individuals, along with a range of hair care products that are curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis. Our unique platform diagnoses the patient’s hair & health history, to identify the root cause behind hair fall and delivers customized hair kits to them right at their doorstep. We have a strong adherence system in place via medically-trained hair coaches and proprietary tech, where we guide the customer across their hair growth journey, and help them stay on track. Traya is founded by Saloni Anand, a techie-turned-marketeer and Altaf Saiyed, a Stanford Business School alumnus. Our Vision: Traya was created with a global vision to create awareness around hair loss, de-stigmatise it while empathising with the customers that it has an emotional and psychological impact. Most importantly, to combine 3 different sciences (Ayurveda, Allopathy and Nutrition) to create the perfect holistic solution for hair loss patients. Role Overview: As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights. Key Responsibilities: ● Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms ● Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources ● Build and support data pipelines for reporting, analytics, and machine learning applications ● Implement and manage streaming data solutions using Kafka and other technologies ● Design and optimize database schemas and data models in ClickHouse and other databases ● Develop and maintain data workflows using Apache Airflow and similar orchestration tools ● Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks ● Collaborate with data scientists to implement ML infrastructure for model training and deployment ● Ensure data quality, reliability, and security across all data platforms ● Monitor data pipelines and implement proactive alerting systems ● Troubleshoot and resolve data infrastructure issues ● Document data flows, architectures, and processes ● Mentor junior data engineers and contribute to establishing best practices ● Stay current with industry trends and emerging technologies in data engineering Qualifications Required ● Bachelor's degree in Computer Science, Engineering, or related technical field (Master's preferred) ● 5+ years of experience in data engineering roles ● Strong expertise in AWS and/or GCP cloud platforms and services ● Proficiency in building data pipelines using modern ETL/ELT tools and frameworks ● Experience with stream processing technologies such as Kafka ● Hands-on experience with ClickHouse or similar analytical databases ● Strong programming skills in Python and experience with PySpark ● Experience with workflow orchestration tools like Apache Airflow ● Solid understanding of data modeling, data warehousing concepts, and 
dimensional modeling ● Knowledge of SQL and NoSQL databases ● Strong problem-solving skills and attention to detail ● Excellent communication skills and ability to work in cross-functional teams Preferred ● Experience in D2C, e-commerce, or retail industries ● Knowledge of data visualization tools (Tableau, Looker, Power BI) ● Experience with real-time analytics solutions ● Familiarity with CI/CD practices for data pipelines ● Experience with containerization technologies (Docker, Kubernetes) ● Understanding of data governance and compliance requirements ● Experience with MLOps or ML engineering Technologies ● Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc) ● Data Processing: Apache Spark, PySpark, Python, SQL ● Streaming: Apache Kafka, Kinesis ● Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB ● Orchestration: Apache Airflow ● Version Control: Git ● Containerization: Docker, Kubernetes (optional) What We Offer ● Competitive salary and comprehensive benefits package ● Opportunity to work with cutting-edge data technologies ● Professional development and learning opportunities ● Modern office in Mumbai with great amenities ● Collaborative and innovation-driven culture ● Opportunity to make a significant impact on company growth
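To ground the Kafka-to-lake leg of the stack listed above, here is a hedged PySpark Structured Streaming sketch; the broker, topic, event schema, and S3 paths are hypothetical, and the Spark Kafka connector package is assumed to be available on the cluster. This is one plausible ingestion pattern, not the company's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Illustrative event schema
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_name", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read click events from Kafka (placeholder broker and topic)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "web_events")
    .option("startingOffsets", "latest")
    .load()
)

# Parse the JSON payload into typed columns
events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Land micro-batches in the bronze layer of the lake (placeholder paths)
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-lake/bronze/web_events/")
    .option("checkpointLocation", "s3a://example-lake/_checkpoints/web_events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```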
Posted 1 week ago
0 years
0 Lacs
Surat, Gujarat, India
On-site
Key Responsibilities:
· Requirement Gathering: Collaborate with stakeholders to gather and document data requirements. Translate business needs into technical specifications for data solutions.
· Data Pipeline Development: Design, implement, and maintain data pipelines using Microsoft Fabric. Create and manage ETL processes, ensuring efficient data flow.
· SSIS Integration: Develop and maintain ETL processes using SQL Server Integration Services (SSIS). Optimize SSIS packages for performance and reliability.
· CDC Pipelines: Implement Change Data Capture (CDC) pipelines to track and manage data changes. Ensure timely and accurate data updates across systems.
· PySpark Proficiency: Utilize PySpark for data transformation and processing tasks. Develop data processing scripts to handle large datasets efficiently.
· Collaboration: Work closely with data analysts, data scientists, and other teams to ensure data integrity. Participate in cross-functional projects to enhance data accessibility and usability.

Required Skills:
· Technical Proficiency: Strong experience with Azure Data Factory, Azure Synapse, and Microsoft Fabric. Proficient in PySpark for data processing and transformation. Solid understanding of SQL Server and experience in writing complex queries.
· Data Modeling: Knowledge of data modeling concepts and best practices. Experience in designing data architectures that support business needs.
· Cloud Services: Familiarity with cloud computing concepts and services, particularly within the Azure ecosystem. Understanding of data storage solutions such as Azure Data Lake and Blob Storage.
· Problem-Solving: Strong analytical skills to troubleshoot data-related issues. Ability to optimize data workflows for performance and efficiency.
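One common step behind the CDC pipelines this posting mentions is collapsing a batch of change records to the latest change per key before applying it to the target table. The PySpark sketch below shows that step with hypothetical column names and an illustrative ADLS landing path; it is a pattern sketch, not the employer's implementation.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import col, row_number

spark = SparkSession.builder.getOrCreate()

# CDC change records landed by an upstream capture process (placeholder path)
changes = spark.read.parquet(
    "abfss://landing@examplelake.dfs.core.windows.net/cdc/customers/"
)

# Keep only the most recent change per business key
latest = (
    changes.withColumn(
        "rn",
        row_number().over(
            Window.partitionBy("customer_id").orderBy(col("change_ts").desc())
        ),
    )
    .filter(col("rn") == 1)
    .drop("rn")
)

# Split by operation type; upserts would then be MERGEd into the target,
# deletes applied separately (column values are illustrative)
upserts = latest.filter(col("operation").isin("INSERT", "UPDATE"))
deletes = latest.filter(col("operation") == "DELETE")
```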
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Hi Candidates,

We have job openings in one of our MNC companies (Remote, C2H). Please apply here or share your updated resume with chandrakala.c@i-q.co

AWS Data Engineer – JD

The requirements for the candidate: Data Engineer with a minimum of 3-5+ years of data engineering experience. The role will require deep knowledge of data engineering techniques to create data pipelines and build data assets.
At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, according to Python best practices.
Strong experience in code optimisation using Spark SQL and PySpark.
Understanding of code versioning, Git repositories and JFrog Artifactory.
AWS architecture knowledge, especially of S3, EC2, Lambda, Redshift, CloudFormation, etc., and able to explain the benefits of each.
Code refactoring of legacy codebases: clean, modernize, improve readability and maintainability.
Unit tests / TDD: write tests before code, ensure functionality, catch bugs early.
Fixing difficult bugs: debug complex code, isolate issues, resolve performance, concurrency, or logic flaws.
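Because the JD asks about unit testing PySpark code, here is a minimal pytest-style sketch: a small, pure transformation function plus a test that runs it against a local SparkSession. The function and column names are illustrative and do not come from any actual codebase.

```python
import pytest
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col, lower

def filter_active_customers(df: DataFrame) -> DataFrame:
    """Keep only rows whose status is 'active' (case-insensitive)."""
    return df.filter(lower(col("status")) == "active")

@pytest.fixture(scope="session")
def spark():
    # Small local session is enough for transformation-level tests
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()

def test_filter_active_customers(spark):
    data = [("c1", "ACTIVE"), ("c2", "churned"), ("c3", None)]
    df = spark.createDataFrame(data, ["customer_id", "status"])

    result = filter_active_customers(df).collect()

    # Only the case-insensitively 'active' customer survives
    assert [r.customer_id for r in result] == ["c1"]
```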
Posted 1 week ago