2.0 - 5.0 years
11 - 16 Lacs
Mumbai
Work from Office
About This Role: Associate Python Developer (Finance Platform Strategies)

We are seeking a technologically adept data specialist with a robust Python background and financial acumen to join the Automation & AI team within Finance Platform Strategies (FPS). Our team champions the transformative integration of automation, including Robotic Process Automation (RPA), to streamline and accelerate financial processes, ensuring peak efficiency and workflow optimization. The FPS group, within BlackRock's global Finance & Strategy organization, is responsible for long-term management of Finance platform and technology initiatives, spanning controllers, financial planning, expense management, treasury, tax, and a range of other proprietary and third-party platform capabilities. The group drives the strategic vision for and implementation of initiatives to enhance our platform capabilities, and delivers day-to-day oversight and management of the platform, with a global footprint.

Collaboration is key. You'll work closely with partners across the Finance organization, BlackRock's Aladdin Engineering team, and the Technology & Development Operations (TDO) organization to achieve our goals. Join us in shaping the future of finance through innovation and excellence.

Core Responsibilities
• Developing and orchestrating technical solutions with a primary focus on Python frameworks, while also leveraging other technologies such as ETL tools and scripting languages to support the automation of financial processes and workflows, data transformations, and system integrations.
• Driving projects to completion by understanding requirements and utilizing a wide range of financial applications and automation tools.
• Ensuring the quality, performance, and reliability of software applications through rigorous testing, debugging, and code reviews.
• Partnering with functions across the global Finance organization to understand and solution business use cases for automation.
• Setting up and maintaining servers to support the Python infrastructure.
• Staying current with the latest developments in Python technologies, as well as industry trends in finance automation.
• Documenting development requirements, including technical specifications, user stories, and acceptance criteria, to ensure clear communication and alignment with stakeholders.
• Mentoring and guiding junior developers to help them grow their skills and knowledge.
• Working closely with the Aladdin Engineering team and TDO to align technology solutions with business needs.
• Contributing as a Finance Technology Subject Matter Expert (SME), developing solutions around the inventory of technical tools available within BlackRock.

Required Skills and Experience
• Advanced proficiency in Python, including a deep understanding of frameworks such as Pandas, NumPy, and PySpark, to architect and implement robust data transformation solutions (a small illustrative sketch follows this posting).
• Extensive experience with data modeling, both relational and non-relational, and schema design (e.g., SQL Server, star and snowflake schemas).
• Proven expertise in API integration, including RESTful and GraphQL, for data enrichment and processing.
• Proficiency in data cleaning, normalization, and validation for maintaining high data quality.
• Strong experience in data science and machine learning, with proficiency in libraries such as scikit-learn, TensorFlow, or PyTorch.
• Exposure to Azure Cognitive Services, offering a competitive edge in leveraging AI and machine learning capabilities.
• Practical experience with cloud platforms, particularly Microsoft Azure, and a solid grasp of cloud services and infrastructure.
• Proficiency in DevOps practices, with experience using tools like Azure DevOps for continuous integration and delivery.
• Comfort working in a Linux environment, demonstrating versatility across operating systems.
• Knowledge of cloud deployment technologies, such as Docker, to facilitate efficient deployment and scaling of applications.
• Familiarity with real-time data streaming platforms such as Apache Kafka.
• Understanding of containerization and orchestration technologies like Docker and Kubernetes.
• Strong command of data governance and security practices.
• Experience building intuitive and responsive user interfaces using modern frontend technologies such as Streamlit, Dash, Panel, or Flask.

Good to Have
• Experience working with Azure Document Intelligence (ADI) for data processing.
• Experience with GPT APIs, such as Chat Completion.
• Familiarity with other programming languages such as C# or Java, adding a valuable dimension to the candidate's technical toolkit.
• Proven experience in software development and the ability to autonomously understand an existing codebase.
• Curiosity about the functional aspects of the product; a base knowledge of the finance industry is highly appreciated.
• Strong analytical and problem-solving skills, with a proactive approach and the ability to balance multiple projects simultaneously.
• Proficiency in English, both written and spoken.
• Exposure to data visualization tools like Matplotlib, Power BI, or Tableau.

Qualifications
For candidates in India: B.E., B.Tech, MCA, or any other relevant engineering degree from a reputed university, and a minimum of 5 years of proven experience in the field.

Our Benefits: To help you stay energized, engaged, and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents, and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.

Our hybrid work model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience at BlackRock.

About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes, and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment: the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued, and supported, with networks, benefits, and development opportunities to help them thrive.

For additional information on BlackRock, please visit www.blackrock.com | Twitter: @blackrock | LinkedIn: linkedin.com/company/blackrock. BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation, and other attributes protected by law.
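To make the data-transformation requirement concrete, here is a minimal pandas sketch of the cleaning-and-validation pattern such a role describes. The dataset, column names, and rules are invented for illustration; this is not BlackRock's actual pipeline.

    import pandas as pd

    def clean_expense_data(df: pd.DataFrame) -> pd.DataFrame:
        """Normalize and validate a hypothetical expense extract."""
        df = df.copy()
        # Normalize column names: trim, lowercase, underscore-separate
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        # Standardize a hypothetical text field
        df["cost_center"] = df["cost_center"].str.strip().str.upper()
        # Coerce amounts to numeric; invalid entries become NaN for review
        df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
        # Validation: flag rows that fail basic quality rules
        invalid = df["amount"].isna() | (df["amount"] < 0)
        if invalid.any():
            print(f"{invalid.sum()} rows failed validation and were dropped")
        return df.loc[~invalid]

    raw = pd.DataFrame({" Cost Center ": [" fin01 ", "ops02"],
                        "Amount": ["100.5", "bad"]})
    print(clean_expense_data(raw))

The same shape of function scales naturally to PySpark for larger volumes.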
Posted 5 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Description
At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company, one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability, and dedicated professionals needed to achieve results for our clients, and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.

Your future duties and responsibilities
Job Title: Python & PySpark/Spark Developer
Category: Software Development
Main location: Chennai (preferred), Bangalore
Position ID: J0625-0234
Employment Type: Full Time
Qualification: Bachelor of Engineering
Experience: 6-8 years
Shift: UK Shift

Job Overview: Capital Markets Technology, Rates IT group is seeking an experienced Software Developer to work on a Risk Services platform supporting the Interest Rates, Structured, and Resource Management trading desks. The platform stores risk analytics generated by a proprietary valuation engine and makes them available through a variety of interfaces to traders, risk managers, Finance, and others. The system also generates time-sensitive reports for financial and regulatory reporting.

What will you do?
• Work as a member of a global team to build technology solutions used across the Rates and Resource Management trading businesses.
• Design, develop, and maintain reusable Java components for data loading, extracts, and transformations.
• Lead project streams within the group, and mentor others on the team.
• Participate in requirements gathering and meetings with business stakeholders and other technology groups to produce analyses of use cases and solution designs.
• Provide second-level support for a business-critical system.

Must have:
• Strong technical developer with 7+ years of hands-on experience.
• 4+ years of application development experience in Python & PySpark/Spark.
• 4+ years of experience working with OO principles.
• Ability to write SQL queries and bash shell scripts.
• Ability to learn and adapt, and to communicate in a clear and concise way.
• Experience writing unit test cases and performing thorough unit testing (a small sketch follows this posting).
• Experience programming with Spring Boot and Java 8; experience and knowledge of the Spark framework; experience programming in Java/Python and PySpark.
• Familiarity with CI/CD concepts, pipelines, and frameworks such as Git, Jenkins, Maven, and Ansible.
• Unix/Linux basics. REST API basics.

Nice to have:
• Experience in Capital Markets.
• Experience with Spark and HDFS strongly desired.
• Experience with in-memory databases.
• Experience in Agile delivery using Jira.
• Knowledge of interest/credit derivative products, and related trade risk management and/or valuations.

Required qualifications to be successful in this role
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
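As an illustration of the unit-testing expectation above, here is a minimal pytest sketch for a PySpark transformation. The function, column names, and FX-conversion logic are hypothetical, and a local Spark session (with Java available) is assumed.

    import pytest
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    def add_notional_usd(df, fx_rate: float):
        # Hypothetical transformation: convert a notional amount to USD
        return df.withColumn("notional_usd", F.col("notional") * fx_rate)

    @pytest.fixture(scope="session")
    def spark():
        # Single local executor keeps the test fast and deterministic
        return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

    def test_add_notional_usd(spark):
        df = spark.createDataFrame([(100.0,), (250.0,)], ["notional"])
        out = add_notional_usd(df, fx_rate=1.1)
        values = [row.notional_usd for row in out.collect()]
        assert values == pytest.approx([110.0, 275.0])

Keeping transformations as pure functions of DataFrames, as here, is what makes this style of test cheap to write.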
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What is Blend?
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com

What is the Role?
We are looking for a forward-thinking Data & AI Engineer with 1–3 years of experience in data engineering and a passion for using modern AI tools to accelerate development workflows. The ideal candidate is proficient in Python, SQL, and PySpark, and has experience working in on-premise big data environments (e.g., Spark, Hadoop, Hive, HDFS). This role is ideal for someone eager to blend traditional data engineering practices with AI-augmented software development, helping us build high-performance pipelines and deliver faster, smarter solutions.

What you'll be doing?
• Develop and maintain robust ETL/ELT pipelines using Python, SQL, and PySpark (see the sketch after this posting).
• Work with on-premise big data platforms such as Spark, Hadoop, Hive, and HDFS.
• Optimize and troubleshoot workflows to ensure performance, reliability, and quality.
• Use AI tools to assist with code generation, testing, debugging, and documentation.
• Collaborate with data scientists, analysts, and engineers to support data-driven use cases.
• Maintain up-to-date documentation using AI summarization tools.
• Apply AI-augmented software engineering practices, including automated testing, code reviews, and CI/CD.
• Identify opportunities for automation and process improvement across the data lifecycle.

What do we need from you?
• 1–3 years of hands-on experience as a Data Engineer or in a similar data-focused engineering role.
• Proficiency in Python for data manipulation, automation, and scripting.
• Solid understanding of SQL and relational database design.
• Experience building distributed data processing solutions with PySpark.
• Familiarity with on-premise big data ecosystems, including Hadoop, Hive, and HDFS.
• Active use of AI development tools, such as: GitHub Copilot, Windsurf, or Cursor for intelligent code assistance; ChatGPT or similar for testing support, refactoring, and documentation; AI-based testing frameworks or custom scripts.
• Familiarity with Git and CI/CD pipelines.
• Strong analytical skills and a mindset for automation and innovation.

What do you get in return?
• Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
• Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
• Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
• Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills.
• Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
• Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts, and the chance to see your ideas come to life as part of our reward program.
• Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
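To ground the ETL bullet above, here is a minimal PySpark sketch of the on-premise Hive-to-Parquet batch pattern the role describes. The database, table, column names, and HDFS path are assumptions for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("orders_etl")
             .enableHiveSupport()   # an on-prem Hive metastore is assumed
             .getOrCreate())

    # Extract from a hypothetical Hive table, transform, load as Parquet
    orders = spark.table("raw_db.orders")
    daily = (orders
             .filter(F.col("status") == "COMPLETE")
             .groupBy("order_date")
             .agg(F.sum("amount").alias("revenue"),
                  F.countDistinct("customer_id").alias("customers")))

    (daily.write.mode("overwrite")
          .partitionBy("order_date")
          .parquet("hdfs:///warehouse/curated/daily_orders"))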
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer
What You Will Do
Let's do this. Let's change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, collaborating with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen's industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
• Design, develop, and maintain data solutions for data generation, collection, and processing.
• Be a key team member assisting in the design and development of the data pipeline.
• Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
• Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
• Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
• Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
• Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
• Implement data security and privacy measures to protect sensitive data.
• Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
• Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
• Identify and resolve complex data-related challenges.
• Adhere to best practices for coding, testing, and designing reusable code/components.
• Explore new tools and technologies that will help to improve ETL platform performance.
• Participate in sprint planning meetings and provide estimations on technical implementation.
• Collaborate and communicate effectively with product teams.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT, or a related field.

Must-Have Skills:
• Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); workflow orchestration; performance tuning on big data processing (see the sketch after this posting).
• Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
• Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
• Excellent problem-solving skills and the ability to work with large, complex datasets.
• Strong understanding of data governance frameworks, tools, and best practices, plus knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Preferred Qualifications:
Good-to-Have Skills:
• Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
• Strong understanding of data modeling, data warehousing, and data integration concepts.
• Knowledge of Python/R, Databricks, SageMaker, OMOP.

Professional Certifications:
• Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)
• Certified Data Scientist (preferably on Databricks or cloud environments)
• Machine Learning Certification (preferably on Databricks or cloud environments)
• SAFe for Teams certification (preferred)

Soft Skills:
• Excellent critical-thinking and problem-solving skills
• Strong communication and collaboration skills
• Demonstrated awareness of how to function in a team setting
• Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
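As a sketch of the PySpark EDA and performance-aware aggregation the must-have skills describe: the S3 path and columns are invented, and the repartition/cache choices are illustrative of common tuning moves, not Amgen's actual pipeline.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("rwe_eda").getOrCreate()

    # Hypothetical patient-claims extract; path and columns are invented
    claims = spark.read.parquet("s3://example-rwe/claims/")

    # Repartition on the grouping key before a wide aggregation to limit
    # shuffle skew, then cache because the result is reused twice below
    by_patient = (claims.repartition("patient_id")
                  .groupBy("patient_id")
                  .agg(F.count("*").alias("n_claims"),
                       F.sum("paid_amount").alias("total_paid"))
                  .cache())

    by_patient.describe("n_claims", "total_paid").show()
    by_patient.orderBy(F.desc("total_paid")).show(10)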
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Associate Data Engineer
What You Will Do
Let's do this. Let's change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.

Roles & Responsibilities:
• Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms (a minimal orchestration sketch follows this posting).
• Managing and maintaining the AWS and Databricks environments.
• Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
• Maintaining system uptime and optimal performance.
• Working closely with cross-functional teams to understand business requirements and translate them into technical solutions.
• Exploring and implementing new tools and technologies to enhance ETL platform performance.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Bachelor's degree and 2 to 6 years of experience.

Functional Skills:
Must-Have Skills:
• Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores, with a proven ability to optimize query performance on big data platforms.
• Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes.
• Ability to learn new technologies quickly.
• Strong problem-solving and analytical skills.
• Excellent communication and teamwork skills.

Good-to-Have Skills:
• Experience with SQL/NoSQL databases and vector databases for large language models.
• Experience with data modeling and performance tuning for both OLAP and OLTP databases.
• Experience with Apache Spark and Apache Airflow.
• Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
• Experience with AWS, GCP, or Azure cloud services.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
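A minimal sketch of the Python/Airflow orchestration pattern named in the must-have skills above. The DAG id, schedule, and task bodies are placeholders; the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull source files")       # placeholder for real extraction logic

    def transform():
        print("run PySpark transformation")  # placeholder for real transform logic

    with DAG(
        dag_id="ingest_daily",           # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t1 >> t2                         # extract runs before transform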
Posted 5 days ago
6.0 years
0 - 0 Lacs
India
Remote
Title: Data Scientist (Finance)
Location: Fully remote in India
Duration: 12-month contract
Note: only accepting immediate joiners or candidates on 30-day notice periods.

Required Skills & Experience
• 6+ years of experience as a Data Scientist building models, owning the building blocks from data exploration to model building, start to finish.
• Experience with data analysis, time series analysis/forecasting and forecasting algorithms, and gradient boosting.
• Strong Python and PySpark programming skills.
• Experience deploying ML models and monitoring them in production.
• Excellent communication and experience with stakeholder management.

Nice to Have Skills & Experience
• Azure Databricks
• AWS cloud experience
• Azure Data Factory (ADF)
• Understanding of clusters and how they are created
• Previous Pepsi experience

Job Description: A retail client is looking for Data Scientists to join their Strategy and Transformation team, sitting remotely in India. They will help to build out a complex global financial platform and deploy it into production. They will be involved in data ingestion and experimentation, work closely with stakeholders, create model and machine learning pipelines, and create dashboards using Power BI to review and present to business partners.

Compensation: $10.00-12.00/hr USD
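To illustrate the time-series-forecasting-with-gradient-boosting requirement, a small self-contained sketch using lag features and scikit-learn's GradientBoostingRegressor on synthetic data. Real work would add exogenous drivers, holiday effects, and proper backtesting; everything here is illustrative.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # Synthetic monthly revenue series (36 months)
    s = pd.Series(range(100, 136),
                  index=pd.date_range("2022-01-01", periods=36, freq="MS"))
    df = pd.DataFrame({"y": s})

    # Lag features turn forecasting into a supervised regression problem
    for lag in (1, 2, 3, 12):
        df[f"lag_{lag}"] = df["y"].shift(lag)
    df = df.dropna()

    train, test = df.iloc[:-6], df.iloc[-6:]   # hold out the last 6 months
    features = [c for c in df.columns if c.startswith("lag_")]
    model = GradientBoostingRegressor().fit(train[features], train["y"])
    preds = model.predict(test[features])
    print(list(zip(test.index.strftime("%Y-%m"), preds.round(1))))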
Posted 5 days ago
5.0 - 6.0 years
0 Lacs
Andhra Pradesh, India
On-site
Title: Developer (AWS Engineer)

Requirements
• Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
• Strong hands-on experience; proficient in Node.js and Python.
• Seasoned developer capable of independently driving development tasks.
• Ability to understand the existing system architecture and work towards the target architecture.
• Experience with data profiling activities: discovering data quality challenges and documenting them.

Good to have
• Experience with development and implementation of a large-scale Data Lake and data analytics platform on the AWS Cloud; developing and unit-testing data pipeline architecture for data ingestion processes using AWS native services (a hedged Glue sketch follows this posting).
• Experience with development on AWS Cloud using AWS services such as Redshift, RDS, S3, Glue ETL, Glue Data Catalog, EMR, PySpark, Python, Lake Formation, Airflow, SQL scripts, etc.
• Experience building a data analytics platform using Databricks (data pipelines) and Starburst (semantic layer) in an AWS cloud environment.
• Experience with orchestration of workflows in an enterprise environment.
• Experience working with source code management tools such as AWS CodeCommit or GitHub.
• Experience working with Jenkins or any CI/CD pipelines using AWS services.
• Working experience with Agile methodology.
• Experience working in an onshore/offshore model and collaborating on deliverables.
• Good communication skills to interact with the onshore team.
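A hedged skeleton of the kind of AWS Glue PySpark ingestion job the posting describes, following standard Glue job boilerplate. The catalog database, table name, filter rule, and S3 path are invented for illustration.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from a hypothetical Glue Data Catalog table
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="customer_events")

    # Drop obviously bad records, then land curated Parquet on S3
    clean = dyf.filter(lambda rec: rec["customer_id"] is not None)
    glue_context.write_dynamic_frame.from_options(
        frame=clean,
        connection_type="s3",
        connection_options={"path": "s3://example-curated/customer_events/"},
        format="parquet")
    job.commit()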
Posted 5 days ago
3.0 - 7.0 years
4 - 8 Lacs
Pune
Work from Office
As a data engineer, you will be responsible for delivering data intelligence solutions to our customers all around the globe, based on an innovative product which provides insights into the performance of their material handling systems. You will be working on implementing and deploying the product as well as designing solutions to fit it to our customers' needs. You will work together with an energetic and multidisciplinary team to build end-to-end data ingestion pipelines and implement and deploy dashboards.

Your tasks and responsibilities
• You will design and implement data and dashboarding solutions to maximize customer value.
• You will deploy and automate the data pipelines and dashboards to enable further project implementation.
• You embrace working in an international, diverse team, with an open and respectful atmosphere.
• You leverage data by making it available to other teams within our department to enable our platform vision.
• Communicate and work closely with other groups within Vanderlande and the project team.
• You enjoy an independent and self-reliant way of working with a proactive style of communication, taking ownership to provide the best possible solution.
• You will be part of an agile team that encourages you to speak up freely about improvements, concerns, and blockages. As part of Scrum methodology, you will independently create stories and participate in the refinement process.
• You collect feedback and always search for opportunities to improve the existing standardized product.
• Execute projects from conception through client handover with a positive contribution to technical performance and the organization.
• You will take the lead in communication with the different stakeholders involved in the projects being deployed.

Your profile
• Bachelor's or master's degree in computer science, IT, or equivalent, and a minimum of 6+ years of experience building and deploying complex data pipelines and data solutions.
• Experience developing end-to-end data pipelines using technologies like Databricks.
• Experience with visualization software, preferably Splunk (or else Power BI, Tableau, or similar).
• Strong experience with SQL and Python, with hands-on experience in data modeling.
• Hands-on experience with programming in Python or Java, and proficiency in Test-Driven Development using pytest.
• Experience with PySpark or Spark SQL to deal with distributed data.
• Experience with data schemas (e.g., JSON/XML/Avro); a schema-enforcement sketch follows this posting.
• Experience in deploying services as containers (e.g., Docker, Podman).
• Experience in working with cloud services (preferably Azure).
• Experience with streaming and/or batch storage (e.g., Kafka, Oracle) is a plus.
• Experience in creating APIs is a plus.
• Experience in guiding, motivating, and training engineers.
• Experience in data quality management and monitoring is a plus.
• Strong communication skills in English.
• Skilled at breaking down large problems into smaller, manageable parts.
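To make the data-schema bullet concrete, a minimal PySpark sketch that enforces an explicit schema on JSON ingestion rather than relying on inference, so malformed events surface instead of being silently mistyped. The event fields and path are invented for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    spark = SparkSession.builder.appName("equipment_events").getOrCreate()

    # Explicit schema for hypothetical material-handling telemetry events
    schema = StructType([
        StructField("equipment_id", StringType(), nullable=False),
        StructField("event_time", TimestampType(), nullable=False),
        StructField("throughput", DoubleType(), nullable=True),
    ])

    events = (spark.read
              .schema(schema)
              .option("mode", "PERMISSIVE")   # keep corrupt rows visible
              .json("/data/raw/equipment/*.json"))

    events.where("throughput IS NOT NULL").show(5)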
Posted 5 days ago
5.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
• Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
• Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue (a streaming sketch follows this posting).
• Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
• Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
• Design and implement data models, schemas, and architectures for data lakes and data warehouses.
• Automate manual data processes to improve efficiency and data processing speed.
• Ensure data security, privacy, and compliance with industry standards and regulations.
• Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
• Troubleshoot and resolve data quality and performance issues.
• Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
• 3-10 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
• Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
• Proficiency in Python, Java, or Scala for data processing and scripting.
• Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
• Experience working with data modeling, data lakes, and data pipelines.
• Solid understanding of data governance, data privacy, and security best practices.
• Strong problem-solving and debugging skills.
• Ability to work in an Agile development environment.
• Excellent communication skills and the ability to work cross-functionally.
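As an illustration of the Spark-plus-Kafka pattern in the responsibilities, a minimal Structured Streaming sketch that lands micro-batches in a data lake. Broker, topic, and paths are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    # Subscribe to a hypothetical Kafka topic
    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "orders")
           .load())

    # Kafka delivers key/value as bytes; cast before use
    parsed = raw.select(F.col("key").cast("string"),
                        F.col("value").cast("string").alias("payload"),
                        "timestamp")

    query = (parsed.writeStream
             .format("parquet")
             .option("path", "s3a://example-lake/orders/")
             .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
             .trigger(processingTime="1 minute")
             .start())
    query.awaitTermination()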
Posted 5 days ago
5.0 - 10.0 years
25 - 40 Lacs
Pune
Work from Office
Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
• Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
• Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
• Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
• Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
• Design and implement data models, schemas, and architectures for data lakes and data warehouses.
• Automate manual data processes to improve efficiency and data processing speed.
• Ensure data security, privacy, and compliance with industry standards and regulations.
• Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
• Troubleshoot and resolve data quality and performance issues.
• Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
• 3-10 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
• Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
• Proficiency in Python, Java, or Scala for data processing and scripting.
• Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
• Experience working with data modeling, data lakes, and data pipelines.
• Solid understanding of data governance, data privacy, and security best practices.
• Strong problem-solving and debugging skills.
• Ability to work in an Agile development environment.
• Excellent communication skills and the ability to work cross-functionally.
Posted 5 days ago
5.0 - 10.0 years
25 - 40 Lacs
Noida
Work from Office
Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
• Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
• Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
• Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
• Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
• Design and implement data models, schemas, and architectures for data lakes and data warehouses.
• Automate manual data processes to improve efficiency and data processing speed.
• Ensure data security, privacy, and compliance with industry standards and regulations.
• Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
• Troubleshoot and resolve data quality and performance issues.
• Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
• 3-10 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
• Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
• Proficiency in Python, Java, or Scala for data processing and scripting.
• Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
• Experience working with data modeling, data lakes, and data pipelines.
• Solid understanding of data governance, data privacy, and security best practices.
• Strong problem-solving and debugging skills.
• Ability to work in an Agile development environment.
• Excellent communication skills and the ability to work cross-functionally.
Posted 5 days ago
10.0 - 17.0 years
12 - 17 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
POSITION OVERVIEW: We are seeking an experienced and highly skilled Data Engineer with deep expertise in Microsoft Fabric, MS-SQL, data warehouse architecture design, and SAP data integration. The ideal candidate will be responsible for designing, building, and optimizing data pipelines and architectures to support our enterprise data strategy. The candidate will work closely with cross-functional teams to ingest, transform, and make data (from SAP and other systems) available in our Microsoft Azure environment, enabling robust analytics and business intelligence.

KEY ROLES & RESPONSIBILITIES:
• Spearhead the design, development, deployment, testing, and management of strategic data architecture, leveraging cutting-edge technology stacks in cloud, on-prem, and hybrid environments.
• Design and implement an end-to-end data architecture within Microsoft Fabric / SQL, including Azure Synapse Analytics (incl. data warehousing). This would also encompass a data mesh architecture.
• Develop and manage robust data pipelines to extract, load, and transform data from SAP systems (e.g., ECC, S/4HANA, BW).
• Perform data modeling and schema design for enterprise data warehouses in Microsoft Fabric.
• Ensure data quality, security, and compliance standards are met throughout the data lifecycle.
• Enforce data security measures, strategies, protocols, and technologies, ensuring adherence to security and compliance requirements.
• Collaborate with BI, analytics, and business teams to understand data requirements and deliver trusted datasets.
• Monitor and optimize performance of data processes and infrastructure.
• Document technical solutions and develop reusable frameworks and tools for data ingestion and transformation.
• Establish and maintain robust knowledge management structures, encompassing data architecture, data policies, platform usage policies, development rules, and more, ensuring adherence to best practices, regulatory compliance, and optimization across all data processes.
• Implement microservices, APIs, and event-driven architecture to enable agility and scalability.
• Create and maintain architectural documentation, diagrams, policies, standards, conventions, rules, and frameworks for effective knowledge sharing and handover.
• Monitor and optimize the performance, scalability, and reliability of the data architecture and pipelines.
• Track data consumption and usage patterns to ensure that infrastructure investment is effectively leveraged, through automated alert-driven tracking.

KEY COMPETENCIES:
• Microsoft Certified: Fabric Analytics Engineer Associate, or an equivalent certificate for MS SQL.
• Prior experience working in cloud environments (Azure preferred).
• Understanding of SAP data structures and SAP integration tools like SAP Data Services, SAP Landscape Transformation (SLT), or RFC/BAPI connectors.
• Experience with DevOps practices and version control (e.g., Git).
• Deep understanding of SAP architecture, data models, security principles, and platform best practices.
• Strong analytical skills with the ability to translate business needs into technical solutions.
• Experience with project coordination, vendor management, and Agile or hybrid project delivery methodologies.
• Excellent communication, stakeholder management, and documentation skills.
• Strong understanding of data warehouse architecture and dimensional modeling.
• Excellent problem-solving and communication skills.

QUALIFICATIONS / EXPERIENCE / SKILLS
Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• Certifications such as SQL, Administrator, or Advanced Administrator are preferred.
• Expertise in data transformation using SQL, PySpark, and/or other ETL tools.
• Strong knowledge of data governance, security, and lineage in enterprise environments.
• Advanced knowledge of SQL, database procedures/packages, and dimensional modeling.
• Proficiency in Python and/or Data Analysis Expressions (DAX) (preferred, not mandatory).
• Familiarity with Power BI for downstream reporting (preferred, not mandatory).

Experience:
• 10 years of experience as a Data Engineer or in a similar role.

Skills:
• Hands-on experience with Microsoft SQL (MS-SQL) and Microsoft Fabric, including Synapse (data warehousing, notebooks, Spark).
• Experience integrating and extracting data from SAP systems, such as: SAP ECC or S/4HANA; SAP BW; SAP Core Data Services (CDS) Views or OData Services.
• Knowledge of data protection laws across countries (preferred, not mandatory).
Posted 5 days ago
4.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Title: Senior Machine Learning Engineer

We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at .

Role Overview: We are seeking a highly skilled and experienced Senior Machine Learning Engineer to join our innovative Data Science and Engineering team. Reporting to the Data Science Director, you will play a critical role in building and scaling machine learning systems that power our cybersecurity products. You will work closely with data scientists and software engineers to design, develop, and deploy end-to-end ML pipelines, ensuring high performance and reliability in production. Your engineering expertise will directly contribute to solving complex challenges in areas such as threat detection, malware analysis, anomaly detection, and automated model workflows.

About the role: As a Senior Machine Learning Engineer, you will be part of a high-impact team reporting to the Director of Data Science, focused on developing scalable, production-grade machine learning solutions to power cybersecurity products.
• Collaborate with data scientists and engineering teams to build and scale machine learning pipelines.
• Develop and deploy production-ready ML systems for use cases such as threat and attack detection, threat hunting, malware detection, and anomaly detection (a small anomaly-detection sketch follows this posting).
• Build and optimize distributed data pipelines using Apache Spark and Python.
• Design and implement MLOps workflows including data preprocessing, model training, evaluation, and deployment.
• Monitor and retrain ML models based on feedback and performance in production.
• Develop model serving infrastructure for high availability, low latency, and scalability.
• Automate and streamline feature engineering, hyperparameter tuning, and CI/CD pipelines for ML models.
• Ensure model integration into security products with adherence to compliance and data governance standards.

About you: You are an experienced ML Engineer with a strong foundation in software engineering and a passion for solving complex problems in the cybersecurity domain.

Must-Have Qualifications:
• A Master's degree or equivalent in Machine Learning, Computer Science, or Electrical Engineering, with Mathematics/Statistics at the undergraduate level.
• 4-6 years of industry experience as a Machine Learning Engineer.
• Advanced proficiency in Python and strong programming fundamentals.
• Hands-on experience with Apache Spark or PySpark for distributed data processing.
• Solid understanding of machine learning concepts and experience with ML libraries like scikit-learn, XGBoost, TensorFlow, or PyTorch.
• Experience in building, deploying, and monitoring end-to-end ML systems in production.
• Familiarity with MLOps tools (e.g., MLflow, Kubeflow, SageMaker Pipelines).
• Proficiency in SQL and handling large-scale data.
• Experience working with public cloud platforms (AWS, GCP, or Azure).
• Strong software engineering practices: Git, containerisation (Docker), testing, CI/CD.

Good to Have:
• Knowledge of Large Language Models (LLMs) such as GPT, BERT, LLaMA, or Falcon.
• Experience with LLM fine-tuning, prompt engineering, and retrieval-augmented generation (RAG).
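A small, self-contained sketch of the anomaly-detection use case using scikit-learn's IsolationForest on synthetic "network session" features. The feature choices, distributions, and contamination rate are illustrative only.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Synthetic session features: bytes sent, duration (s), failed logins
    normal = rng.normal(loc=[5000, 60, 0.2], scale=[1500, 20, 0.5], size=(500, 3))
    attacks = rng.normal(loc=[50000, 5, 8.0], scale=[5000, 2, 2.0], size=(10, 3))
    X = np.vstack([normal, attacks])

    # Unsupervised fit: no attack labels required at training time
    model = IsolationForest(contamination=0.02, random_state=0).fit(X)
    scores = model.decision_function(X)   # lower score = more anomalous
    flagged = np.argsort(scores)[:10]
    print("most anomalous session indices:", flagged)

In production this model would sit behind the monitoring and retraining loop the posting describes, with drift checks on the score distribution.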
Posted 5 days ago
6.0 - 7.0 years
9 - 12 Lacs
Chennai
Work from Office
Responsibilities:
• Design, develop, and maintain data pipelines using Azure/AWS, Synapse Analytics, Fabric, and PySpark.
Posted 5 days ago
5.0 - 10.0 years
35 - 40 Lacs
Bengaluru
Work from Office
As a Senior Data Engineer, you will proactively design and implement data solutions that support our business needs while adhering to data protection and privacy standards. In addition, you will manage the technical delivery of the project, lead the overall development effort, and ensure timely, quality delivery.

Responsibilities:
• Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the various data types involved in the data lifecycle, including raw, curated, and lake data, to ensure effective data integration (a small API-to-S3 sketch follows this posting).
• SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that facilitate accurate reporting. This includes optimizing queries for performance and ensuring data quality.
• Linux Command Line: Utilize Linux command-line tools and functions, such as bash shell scripts, cron jobs, grep, and awk, to perform data processing tasks efficiently. This involves automating workflows and managing data pipelines.
• Data Protection: Ensure compliance with data protection and privacy requirements, including regulations like GDPR. This includes implementing best practices for data handling and maintaining the confidentiality of sensitive information.
• Documentation: Create and maintain clear documentation of designs and workflows using tools like Confluence and Visio, so that stakeholders can easily communicate and understand technical specifications.
• API Integration and Data Formats: Collaborate with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to facilitate seamless data integration and automation. Demonstrate proficiency in parsing and working with various data formats, including CSV and Parquet, to support diverse data processing needs.

Key Requirements:
• 5+ years of experience as a Data Engineer, focusing on ETL development.
• 3+ years of experience in SQL and writing complex queries for data retrieval and manipulation.
• 3+ years of experience with the Linux command line and bash scripting.
• Familiarity with data modeling in analytical databases.
• Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 Parquet/CSV).
• Experience with RESTful APIs and AWS services like S3, Glue, and Lambda.
• Experience using Confluence for tracking documentation.
• Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels.
• Ability to work independently and manage multiple tasks and priorities in a dynamic environment.
• Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

Good to Have:
• Experience with Spark.
• Understanding of data visualization tools, particularly Tableau.
• Knowledge of data clean room techniques and integration methodologies.
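To ground the API-integration bullet, a minimal Python sketch that pulls JSON from a REST endpoint and lands it raw in S3 with boto3. The endpoint, bucket, and key are hypothetical, and AWS credentials are assumed to be configured in the environment.

    import json
    import boto3
    import requests

    def ingest_to_s3(url: str, bucket: str, key: str) -> None:
        """Pull one page of records from a REST endpoint and land raw JSON on S3."""
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()                 # fail loudly on HTTP errors
        s3 = boto3.client("s3")
        s3.put_object(Bucket=bucket, Key=key,
                      Body=json.dumps(resp.json()).encode("utf-8"))

    # Hypothetical invocation; in practice this might run inside AWS Lambda
    ingest_to_s3("https://api.example.com/v1/orders?page=1",
                 bucket="example-raw-zone",
                 key="orders/2024-01-01/page-1.json")

Landing the raw payload unchanged, then transforming downstream (e.g., via Glue), keeps the raw/curated lifecycle described above auditable.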
Posted 5 days ago
0 years
14 - 21 Lacs
Gurugram, Haryana, India
On-site
Our client, a professional services firm, is the Indian member firm affiliated with an international network and was established in September 1993. Our professionals leverage the global network of firms, providing detailed knowledge of local laws, regulations, markets, and competition. Our client has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, and Vadodara, and offers services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused, and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Responsibilities:
• Build and optimize data pipelines using Python and PySpark
• Perform data analysis and generate reports
• Write and maintain SQL queries for data extraction

Requirements / Qualifications:
• Proficiency in Python, PySpark, and SQL
• Experience in data analysis and pipeline development
• Strong analytical and problem-solving abilities

Benefits:
• Work with one of the Big 4s in India
• Healthy work environment
• Work-life balance
Posted 5 days ago
6.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Summary
Synechron is seeking a highly skilled and proactive Data Engineer to join our dynamic data analytics team. In this role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and solutions on the Google Cloud Platform (GCP). With your expertise, you'll enable data-driven decision-making, contribute to strategic business initiatives, and ensure robust data infrastructure. This position offers an opportunity to work in a collaborative environment with a focus on innovative technologies and continuous growth.

Software Requirements
Required:
Proficiency in data engineering tools and frameworks such as Hive, Apache Spark, and Python (version 3.x)
Extensive experience with Google Cloud Platform (GCP) offerings including Dataflow, BigQuery, Cloud Storage, and Pub/Sub
Familiarity with Git, Jira, and Confluence for version control and collaboration
Preferred:
Experience with additional GCP services like DataProc, Data Studio, or Cloud Composer
Exposure to other programming languages such as Java or Scala
Knowledge of data security best practices and tools

Overall Responsibilities
Design, develop, and optimize scalable data pipelines on GCP to support analytics and reporting needs
Collaborate with cross-functional teams to translate business requirements into technical solutions
Build and maintain data models, ensuring data quality, integrity, and security
Participate actively in code reviews, adhering to best practices and standards
Develop automated and efficient data workflows to improve system performance
Stay updated with emerging data engineering trends and continuously improve technical skills
Provide technical guidance and support to team members, fostering a collaborative environment
Ensure timely delivery of deliverables aligned with project milestones

Technical Skills (By Category)
Programming Languages: Essential: Python. Preferred: Java, Scala.
Data Management & Databases: Experience with Hive, BigQuery, and relational databases; knowledge of data warehousing concepts and SQL proficiency.
Cloud Technologies: Extensive hands-on experience with GCP services including Dataflow, BigQuery, Cloud Storage, Pub/Sub, and Composer; ability to build and optimize data pipelines leveraging GCP offerings (see the sketch after this listing).
Frameworks & Libraries: Spark (PySpark preferred); Hadoop ecosystem experience is advantageous.
Development Tools & Methodologies: Agile/Scrum methodologies, version control with Git, project tracking via Jira, documentation on Confluence.
Security Protocols: Understanding of data security, privacy, and compliance standards.

Experience Requirements
Minimum of 6-8 years in data or software engineering roles with a focus on data pipeline development
Proven experience in designing and implementing data solutions on cloud platforms, particularly GCP
Prior experience working in agile teams, participating in code reviews, and delivering end-to-end data projects
Experience working with cross-disciplinary teams and understanding varied stakeholder requirements
Exposure to industry best practices for data security, governance, and quality assurance is desired

Day-to-Day Activities
Attend daily stand-up meetings and contribute to project planning sessions
Collaborate with business analysts, data scientists, and other stakeholders to understand data needs
Develop, test, and deploy scalable data pipelines, ensuring efficiency and reliability
Perform regular code reviews, provide constructive feedback, and uphold coding standards
Document technical solutions and maintain clear records of data workflows
Troubleshoot and resolve technical issues in data processing environments
Participate in continuous learning initiatives to stay abreast of technological developments
Support team members by sharing knowledge and resolving technical challenges

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Relevant professional certifications in GCP (such as Google Cloud Professional Data Engineer) are preferred but not mandatory
Demonstrable experience in data engineering and cloud technologies

Professional Competencies
Strong analytical and problem-solving skills, with a focus on outcome-driven solutions
Excellent communication and interpersonal skills to collaborate effectively within teams and with stakeholders
Ability to work independently with minimal supervision and manage multiple priorities effectively
Adaptability to evolving technologies and project requirements
Demonstrated initiative in driving tasks forward and a continuous-improvement mindset
Strong organizational skills with a focus on quality and attention to detail

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture, promoting equality and diversity, and maintaining an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
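A hedged sketch of the kind of GCP pipeline step this role involves, loading data from Cloud Storage into BigQuery and querying it; the project, dataset, table, and bucket URI are invented for illustration (requires the google-cloud-bigquery library):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Load newline-delimited JSON from Cloud Storage into a BigQuery table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/events/*.json",   # hypothetical URI
    "example_project.analytics.events",    # hypothetical destination table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
    ),
)
load_job.result()  # block until the load completes

# Run an aggregation over the loaded data.
query = """
    SELECT event_type, COUNT(*) AS n
    FROM `example_project.analytics.events`
    GROUP BY event_type
"""
for row in client.query(query).result():
    print(row.event_type, row.n)
```

In a production pipeline the load and query steps would typically be orchestrated by Dataflow or Cloud Composer rather than run from a script.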
Posted 5 days ago
4.0 - 9.0 years
8 - 18 Lacs
Chennai, Coimbatore, Vellore
Work from Office
We at Blackstraw.ai are organizing a walk-in interview drive for Data Engineers with a minimum of 3 years' experience in Python, Spark, PySpark, Hadoop, Hive, Snowflake, AWS, and Databricks.

We are looking for a Data Engineer to join our team. You will use various methods to transform raw data into useful data systems, and you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and an understanding of machine learning methods. If you are detail-oriented, with excellent organizational skills and experience in this field, we'd like to hear from you.

Job Requirements:
Participate in the customer's system design meetings and collect the functional/technical requirements.
Meet customer expectations on real-time data integrity and implement efficient solutions.
A clear understanding of Python, Spark, PySpark, Hive, Kafka, and RDBMS architecture.
Experience in writing Spark/Python programs and SQL queries (a Kafka ingestion sketch follows this listing).
Suggest and implement best practices in data integration.
Guide the QA team in defining system integration tests as needed.
Split the planned deliverables into tasks and assign them to the team.

Good to have: knowledge of CI/CD concepts and Apache Kafka.

Key traits:
Excellent communication skills.
Self-motivated and willing to work as part of a team.
Able to collaborate and coordinate in a remote environment.
A proactive problem solver who tackles the challenges that come their way.

Important instructions:
Carry a hard copy of your resume, one passport photograph, and a government identity proof for ease of access to our premises.
Please note: do not carry any electronic devices apart from your mobile phone to the office premises.
Please send your resume to chennai.walkin@blackstraw.ai
Kindly fill out the form below to submit your registration: https://forms.gle/LtNYvGM8pbxMifXw6
Preference will be given to immediate joiners or those who can join within 10-15 days.
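To illustrate the Kafka familiarity the posting asks for, a minimal consumer sketch using the kafka-python library; the topic, broker address, and record fields are hypothetical:

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                          # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

# Consume messages and hand each record to a downstream sink;
# a real job would write to an RDBMS or a Spark-managed table.
for message in consumer:
    record = message.value
    print(f"offset={message.offset} order_id={record.get('order_id')}")
```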
Posted 5 days ago
6.0 - 11.0 years
18 - 33 Lacs
Noida, Pune, Delhi / NCR
Hybrid
Iris Software has been a trusted software engineering partner to several Fortune 500 companies for over three decades. We help clients realize the full potential of technology-enabled transformation by bringing together a unique blend of domain knowledge, best-of-breed technologies, and experience executing essential and critical application development engagements.

Title: Sr Data Engineer / Lead Data Engineer
Experience: 5-12 years
Location: Delhi/NCR, Pune
Shift: 12:30-9:30 pm IST

6+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in:
Amazon S3 for scalable data storage
AWS Glue for ETL and serverless data integration
Amazon S3, DataSync, EMR, and Redshift for data warehousing and analytics
Proficiency in SQL, Python, or PySpark for data processing
Experience with data modeling, partitioning strategies, and performance optimization
Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows (an orchestration sketch follows this listing)

If interested, kindly share your resume at kanika.singh@irissoftware.com
Note: notice period of a maximum of 1 month.
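As an illustration of the orchestration stack named above, a minimal Airflow DAG that triggers an AWS Glue job via boto3; the DAG id, Glue job name, and region are assumptions for the example:

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job():
    # Kick off a Glue ETL run; the job name and region are hypothetical.
    glue = boto3.client("glue", region_name="us-east-1")
    run = glue.start_job_run(JobName="example-etl-job")
    print("Started Glue run:", run["JobRunId"])


with DAG(
    dag_id="glue_etl_daily",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    trigger_glue = PythonOperator(
        task_id="start_glue_job",
        python_callable=start_glue_job,
    )
```

The same pattern maps onto Step Functions or Glue Workflows; only the trigger mechanism changes.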
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Analyze business requirements and functional specifications
Determine the impact of changes on the current functionality of the system
Interact with diverse business partners and technical workgroups
Be flexible to collaborate with the onshore business during US business hours
Be flexible to support project releases during US business hours
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience
3+ years of working experience in Python, PySpark, and Scala
3+ years of experience working with MS SQL Server and NoSQL DBs like Cassandra
Hands-on working experience in Azure Databricks
Solid healthcare domain knowledge
Exposure to DevOps methodology and creating CI/CD deployment pipelines
Exposure to Agile methodology, specifically using tools like Rally
Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on the business logic or for optimization
Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
Experience with streaming applications (Kafka, Spark Streaming, etc.); a minimal streaming sketch follows this listing

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health that are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

#Gen #NJP
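For the preferred streaming exposure, a minimal Spark Structured Streaming sketch reading from Kafka; the broker, topic, and checkpoint path are hypothetical, and the Kafka connector package must be available on the cluster (as it is on Databricks):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("claims_stream").getOrCreate()

# Read a live stream of events from Kafka.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "claims")                     # hypothetical topic
    .load()
)

# Kafka delivers bytes; cast the payload to string before processing.
events = stream.select(col("value").cast("string").alias("payload"))

# Write to console for inspection; a production job would target a durable
# sink such as a Delta table on Azure Databricks.
query = (
    events.writeStream.format("console")
    .option("checkpointLocation", "/tmp/checkpoints/claims")
    .start()
)
query.awaitTermination()  # runs until the stream is stopped
```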
Posted 5 days ago
0 years
14 - 21 Lacs
Greater Kolkata Area
On-site
Our client, a professional services firm, is the Indian member firm affiliated with International and was established in September 1993. Our professionals leverage the global network of firms, providing detailed knowledge of local laws, regulations, markets, and competition. Our client has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, and Vadodara, and offers services to national and international clients across sectors. We strive to provide rapid, performance-based, industry-focused, and technology-enabled services that reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Responsibilities:
Build and optimize data pipelines using Python and PySpark
Perform data analysis and generate reports
Write and maintain SQL queries for data extraction

Qualifications:
Proficiency in Python, PySpark, and SQL
Experience in data analysis and pipeline development
Strong analytical and problem-solving abilities

Benefits:
Work with one of the Big 4 in India
Healthy work environment
Work-life balance
Posted 5 days ago
7.0 - 12.0 years
0 Lacs
Tamil Nadu, India
Remote
Tiger Analytics is a global analytics consulting firm. With data and technology at the core of our solutions, we are solving some of the toughest problems out there. Our culture is modelled around expertise and mutual respect with a team-first mindset. Working at Tiger, you'll be at the heart of this AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire. We are headquartered in Silicon Valley and have delivery centres across the globe. The below role is for our Chennai or Bangalore office, or you can choose to work remotely.

About The Role
As a Program Lead – Healthcare Analytics & Technology, you will be responsible for driving the architecture, delivery, and governance of Azure-based data solutions across multiple programs. You will play a strategic role in data transformation initiatives while mentoring team members and collaborating with stakeholders across functions. The role also requires exposure to advanced analytics, data science, and LLM integration in production environments, along with strong healthcare domain experience. If you are looking for an entrepreneurial environment and are passionate about working on unstructured business problems that can be solved using data, we would like to talk to you.

KRAs:
Lead design and implementation of scalable cloud data platforms
Enable advanced analytics and AI by operationalizing structured and unstructured data flows
Drive data governance, security, and compliance across systems
Oversee CI/CD pipelines, DevOps automation, and release management
Drive data analysis and insights generation, displaying strong knowledge of the healthcare domain
Collaborate with stakeholders to translate business needs into scalable data solutions
Mentor team members and ensure technical alignment across cross-functional teams
Independently manage multiple projects with high impact and visibility

Required Skills, Competencies & Experience:
7-12 years of experience in data engineering and analytics, primarily on Azure
Strong knowledge and experience of working in the healthcare industry
Deep expertise in ADF, Azure Databricks, Synapse, Delta Lake, and Unity Catalog (a Delta Lake sketch follows this listing)
Strong in data modelling, Python, PySpark, SQL, and managing all data types
Proven experience in implementing CI/CD and DevOps for data projects
Familiarity with LLMs, machine learning, and operationalization within Azure
Strong leadership, project management, and stakeholder communication skills
Certifications such as Azure Solutions Architect or Databricks Data Engineer Professional are preferred

Location: Delhi / NCR preferred. Designation will be commensurate with expertise/experience. Compensation packages are among the best in the industry.
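A small, hedged example of the Delta Lake curation pattern this role covers; the mount path and table names are invented, and on Databricks the destination could instead be registered under Unity Catalog (catalog.schema.table):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta_curation").getOrCreate()

# Land raw files, then curate them into a governed Delta table.
raw = spark.read.json("/mnt/raw/claims/")   # hypothetical mount point
curated = (
    raw.filter(F.col("status").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
)

# Overwrite the curated layer as a managed Delta table.
curated.write.format("delta").mode("overwrite").saveAsTable(
    "analytics.claims_curated"               # hypothetical table name
)

# Downstream consumers query it directly with SQL.
spark.sql(
    "SELECT status, COUNT(*) AS n FROM analytics.claims_curated GROUP BY status"
).show()
```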
Posted 5 days ago
3.0 - 6.0 years
11 - 20 Lacs
Bengaluru
Work from Office
Role & Responsibilities
We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. The candidate will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision-making across the organization.
Strong proficiency in Spark SQL; hands-on experience with real-time streaming using Kafka and Flink (a minimal Flink sketch follows this listing)
Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
Strong operations management and stakeholder communication skills
Flexibility to work across time zones
Cross-cultural communication mindset
Experience working in cross-functional teams
Continuous learning mindset and adaptability to new technologies

Preferred Candidate Profile
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
3+ years of experience in data engineering, software engineering, or a related role
Proven experience building and maintaining production data pipelines
Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc.
Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies
Orchestration tools: Apache Airflow and UC4
Proficiency in Python, Unix, or similar languages
Good understanding of SQL, Oracle, SQL Server, NoSQL, or similar technologies
Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
Preference for immediate joiners or candidates with a notice period under 30 days
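As a minimal illustration of the Flink skills listed, a PyFlink Table API sketch; the in-memory rows stand in for a real Kafka source, which would additionally need the Kafka connector JAR on the classpath:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Create a streaming-mode table environment.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Stand-in source data; a production job would declare a Kafka table instead.
orders = t_env.from_elements(
    [(1, "clothing", 32.0), (2, "electronics", 199.9), (3, "clothing", 18.5)],
    ["order_id", "category", "amount"],
)
t_env.create_temporary_view("orders", orders)

# Run a continuous aggregation with SQL and print the changelog.
t_env.sql_query(
    "SELECT category, SUM(amount) AS total FROM orders GROUP BY category"
).execute().print()
```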
Posted 5 days ago