5.0 years
0 Lacs
Visakhapatnam, Andhra Pradesh, India
On-site
Position: Azure Data Engineer
Experience: 5+ years
Location: Visakhapatnam
Primary Skills: Azure Data Factory, Azure Synapse Analytics, PySpark, Scala, CI/CD

Job Description:
- 5+ years of experience in data engineering or a related field.
- Strong hands-on experience with Azure Synapse Analytics and Azure Data Factory (ADF).
- Proven experience with Databricks, including development in PySpark or Scala.
- Proficiency in DBT for data modeling and transformation.
- Expertise in SQL and performance tuning techniques.
- Solid understanding of data warehousing concepts and ETL/ELT design patterns.
- Experience working in Agile environments and familiarity with Git-based version control.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with CI/CD tools and DevOps for data engineering.
- Familiarity with Delta Lake and Lakehouse architecture.
- Exposure to other Azure services such as Azure Data Lake Storage (ADLS), Azure Key Vault, and Azure DevOps.
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
The data architect is responsible for designing, creating, and managing an organization's data architecture. This role is critical in establishing a solid foundation for data management within an organization, ensuring that data is organized, accessible, secure, and aligned with business objectives. The data architect designs data models, warehouses, file systems and databases, and defines how data will be collected and organized.

Responsibilities:
- Interprets and delivers impactful strategic plans improving data integration, data quality, and data delivery in support of business initiatives and roadmaps
- Designs the structure and layout of data systems, including databases, warehouses, and lakes
- Selects and designs database management systems that meet the organization's needs by defining data schemas, optimizing data storage, and establishing data access controls and security measures
- Defines and implements the long-term technology strategy and innovation roadmaps across analytics, data engineering, and data platforms
- Designs ETL processes that move data from various sources into the organization's data systems
- Translates high-level business requirements into data models and appropriate metadata, test data, and data quality standards
- Manages senior business stakeholders to secure strong engagement and ensures that project delivery aligns with longer-term strategic roadmaps
- Simplifies the existing data architecture, delivering reusable services and cost-saving opportunities in line with the policies and standards of the company
- Leads and participates in the peer review and quality assurance of project architectural artifacts across the EA group through governance forums
- Defines and manages standards, guidelines, and processes to ensure data quality
- Works with IT teams, business analysts, and data analytics teams to understand data consumers' needs and develop solutions
- Evaluates and recommends emerging technologies for data management, storage, and analytics
- Design, create, and implement logical and physical data models for both IT and business solutions to capture the structure, relationships, and constraints of relevant datasets
- Build and operationalize complex data solutions, correct problems, apply transformations, and recommend data cleansing/quality solutions
- Effectively collaborate and communicate with various stakeholders to understand data and business requirements and translate them into data models
- Create entity-relationship diagrams (ERDs), data flow diagrams, and other visualization tools to represent data models
- Collaborate with database administrators and software engineers to implement and maintain data models in databases, data warehouses, and data lakes
- Develop data modeling best practices, and use these standards to identify and resolve data modeling issues and conflicts
- Conduct performance tuning and optimization of data models for efficient data access and retrieval
- Incorporate core data management competencies, including data governance, data security and data quality

Job Requirements:
Education: A bachelor's degree in computer science, data science, engineering, or a related field
Experience:
- At least five years of relevant experience in design and implementation of data models for enterprise data warehouse initiatives
- Experience leading projects involving data warehousing, data modeling, and data analysis
- Design experience in Azure Databricks, PySpark, and Power BI/Tableau
Skills:
- Ability in programming languages such as Java, Python, and C/C++
- Ability in data science languages/tools such as SQL, R, SAS, or Excel
- Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure, GCP), real-time data distribution (Kafka, Dataflow), and modern data warehouse tools (Snowflake, Databricks)
- Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata
- Understanding of entity-relationship modeling, metadata systems, and data quality tools and techniques
- Ability to think strategically and relate architectural decisions and recommendations to business needs and client culture
- Ability to assess traditional and modern data architecture components based on business needs
- Experience with business intelligence tools and technologies such as ETL, Power BI, and Tableau
- Ability to regularly learn and adopt new technology, especially in the ML/AI realm
- Strong analytical and problem-solving skills
- Ability to synthesize and clearly communicate large volumes of complex information to senior management with varying levels of technical understanding
- Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders
- Ability to guide solution design and architecture to meet business needs
- Expert knowledge of data modeling concepts, methodologies, and best practices
- Proficiency in data modeling tools such as Erwin or ER/Studio
- Knowledge of relational databases and database design principles
- Familiarity with dimensional modeling and data warehousing concepts
- Strong SQL skills for data querying, manipulation, and optimization, and knowledge of other data science languages, including JavaScript, Python, and R
- Ability to collaborate with cross-functional teams and stakeholders to gather requirements and align on data models
- Excellent analytical and problem-solving skills to identify and resolve data modeling issues
- Strong communication and documentation skills to effectively convey complex data modeling concepts to technical and business stakeholders
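For illustration only, a minimal PySpark sketch of the kind of star-schema (fact and dimension) structures this role designs, expressed as Spark SQL DDL; the database, table and column names are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession

# Illustrative only: database, table, and column names are hypothetical.
spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

spark.sql("CREATE DATABASE IF NOT EXISTS sales_dw")

# One dimension and one fact table from a simple star schema.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_dw.dim_customer (
        customer_key  BIGINT,
        customer_id   STRING,
        customer_name STRING,
        region        STRING
    ) USING PARQUET
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_dw.fact_sales (
        sales_key    BIGINT,
        customer_key BIGINT,
        product_key  BIGINT,
        quantity     INT,
        amount       DECIMAL(18,2),
        sale_date    DATE
    ) USING PARQUET
    PARTITIONED BY (sale_date)
""")
```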
Posted 1 week ago
3.0 - 8.0 years
7 - 17 Lacs
Mumbai, Pune
Hybrid
Role: Senior Data Engineer
Location: Mumbai & Pune
Experience: 3 to 8 years
Technologies / Skills: Advanced SQL, Python and associated libraries like Pandas, NumPy etc., PySpark, shell scripting, data modelling, big data, Hadoop, Hive, ETL pipelines.

Responsibilities:
• Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions and develop data engineering strategy.
• Ability to work with business owners to define key business requirements and convert them to user stories with the required technical specifications.
• Communicate results and business impacts of insight initiatives to key stakeholders to collaboratively solve business problems.
• Work closely with the overall Enterprise Data & Analytics Architect and Engineering practice leads to ensure adherence to best practices and design principles.
• Assure quality, security and compliance requirements are met for the supported area.
• Design and create fault-tolerant data pipelines running on clusters.
• Excellent communication skills with the ability to influence client business and IT teams.
• Should have designed data engineering solutions end to end, with the ability to come up with scalable and modular solutions.

Required Qualifications:
• 3+ years of hands-on experience designing and developing data pipelines for data ingestion or transformation using Python (PySpark)/Spark SQL in the AWS cloud.
• Experience in design and development of data pipelines and processing of data at scale.
• Advanced experience in writing and optimizing efficient SQL queries with Python and Hive, handling large data sets in big data environments.
• Experience in debugging, tuning and optimizing PySpark data pipelines.
• Should have implemented the concepts and have good knowledge of PySpark data frames, joins, caching, memory management, partitioning, parallelism etc.
• Understanding of the Spark UI, event timelines, DAGs and Spark config parameters, in order to tune long-running data pipelines.
• Experience working in Agile implementations.
• Experience with building data pipelines in streaming and batch mode.
• Experience with Git and CI/CD pipelines to deploy cloud applications.
• Good knowledge of designing Hive tables with partitioning for performance.

Desired Qualifications:
• Experience in data modelling.
• Hands-on experience creating workflows on a scheduling tool like Autosys or CA Workload Automation.
• Proficiency in using SDKs for interacting with native AWS services.
• Strong understanding of the concepts of ETL, ELT and data modeling.
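For illustration only, a minimal PySpark sketch of the joins, caching and partitioning techniques this listing calls out; the paths, table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: paths, table names and column names are hypothetical.
spark = (
    SparkSession.builder
    .appName("pipeline-tuning-sketch")
    .config("spark.sql.shuffle.partitions", "200")  # tune for the data volume
    .getOrCreate()
)

orders = spark.read.parquet("s3a://example-bucket/orders/")        # large fact data
customers = spark.read.parquet("s3a://example-bucket/customers/")  # small dimension

# Broadcast the small side so the large side is not shuffled for the join.
enriched = orders.join(F.broadcast(customers), on="customer_id", how="left")

# Cache a reused intermediate result and repartition by the write key
# so downstream aggregations and the final write stay balanced.
enriched = enriched.repartition("order_date").cache()

daily = enriched.groupBy("order_date").agg(F.sum("amount").alias("daily_amount"))
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/daily_sales/"
)
```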
Posted 1 week ago
7.5 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will require you to balance technical oversight with team management, fostering an environment of innovation and collaboration.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress and address any roadblocks.

Professional & Technical Skills:
- Candidate must have cloud knowledge, preferably AWS.
- Must have coding experience in Python and the Spark framework.
- Mandatory SQL knowledge.
- Good to have exposure to CI/CD and Docker containers.
- Strong verbal and written communication.
- Strong analytical and problem-solving skills.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PySpark.
- This position is based in Chennai.
- A 15 years full time education is required.
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: PySpark
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will be responsible for designing, building, and configuring applications. Acting as the primary point of contact, you will lead the development team, oversee the delivery process, and ensure successful project execution.

Roles & Responsibilities:
- Act as a Subject Matter Expert (SME) in application development
- Lead and manage a development team to achieve performance goals
- Make key technical and architectural decisions
- Collaborate with cross-functional teams and stakeholders
- Provide technical solutions to complex problems across multiple teams
- Oversee the complete application development lifecycle
- Gather and analyze requirements in coordination with stakeholders
- Ensure timely and high-quality delivery of projects

Professional & Technical Skills:
Must-Have Skills:
- Proficiency in Apache Spark
- Strong understanding of big data processing
- Experience with data streaming technologies
- Hands-on experience in building scalable, high-performance applications
- Knowledge of cloud computing platforms
Must-Have Additional Skills:
- PySpark
- Spark SQL / SQL
- AWS

Additional Information:
- This is a full-time, on-site role based in Gurugram
- Candidates must have a minimum of 5 years of hands-on experience with Apache Spark
- A minimum of 15 years of full-time formal education is mandatory
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai, Coimbatore, Bengaluru
Hybrid
Job Summary: We are looking for a highly skilled Senior AWS Data Engineer to design, develop, and lead enterprise-grade data solutions on the AWS cloud. This position requires a blend of deep AWS technical proficiency, hands-on PySpark experience, and the ability to engage with business stakeholders in solution design. The ideal candidate will build scalable, secure, and high-performance data platforms using AWS-native tools and best practices.

Role & responsibilities:
- Design and implement scalable AWS cloud-native data architectures, including data lakes, warehouses, and streaming pipelines
- Develop ETL/ELT pipelines using AWS Glue (PySpark/Scala), Lambda, and Step Functions
- Optimize Redshift-based data warehouses including schema design, data distribution, and materialized views
- Leverage Athena, Glue Data Catalog, and S3 for efficient serverless query patterns
- Implement IAM-based data access control, lineage tracking, and encryption for secure data workflows
- Automate infrastructure and data deployments using CDK, Terraform, or CloudFormation
- Drive data modelling standards (Star/Snowflake, 3NF, Data Vault) and ensure data quality and governance
- Collaborate with data scientists, DevOps, and business stakeholders to deliver end-to-end data solutions
- Mentor junior engineers and lead code reviews and architecture discussions
- Participate in client-facing activities including requirements gathering, technical proposal preparation, and solution demos

Must-Have Qualifications:
- AWS Expertise: Proven hands-on experience with AWS Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch
- PySpark & Big Data: Minimum 2 years of hands-on PySpark/Spark experience for large-scale data processing
- ETL/ELT Engineering: Expertise in Python, dbt, or similar automation frameworks
- Data Modelling: Proficiency in designing and implementing normalized and dimensional models
- Performance Optimization: Ability to tune Spark jobs with custom partitioning, broadcast joins, and memory management
- CI/CD & Automation: Experience with GitHub Actions, CodePipeline, or similar tools
- Consulting & Pre-sales: Prior exposure to client-facing roles including proposal drafting and cost estimation

Good-to-Have Skills:
- Knowledge of Iceberg, Hudi, or Delta Lake file formats
- Experience with Athena Federated Queries and AWS OpenSearch
- Familiarity with DataZone, DataBrew, and data profiling tools
- Understanding of compliance frameworks like GDPR, HIPAA, SOC2
- BI integration skills using Power BI, QuickSight, or Tableau
- Knowledge of event-driven architectures (e.g., Kinesis, MSK, Lambda)
- Exposure to lakehouse or data mesh architectures
- Experience with Lucidchart, Miro, or other documentation/storyboarding tools

Why Join Us?
- Work on cutting-edge AWS data platforms
- Collaborate with a high-performing team of engineers and architects
- Opportunity to lead key client engagements and shape large-scale solutions
- Flexible work environment and strong learning culture
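For illustration only, a minimal sketch of an AWS Glue PySpark job of the kind described above, using the standard Glue job bootstrap; the catalog database, table and bucket names are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap; names and paths below are hypothetical.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with Spark, write Parquet to S3.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="transactions"
)
df = source.toDF().filter("amount > 0")

df.write.mode("append").partitionBy("transaction_date").parquet(
    "s3://example-curated-bucket/transactions/"
)
job.commit()
```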
Posted 1 week ago
8.0 - 12.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Happiest Minds Technologies Pvt. Ltd is looking for a Sr Data and ML Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Skills: Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipeline, Machine Learning, Airflow
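For illustration only, a minimal Spark MLlib pipeline sketch touching the Spark ML and PySpark skills listed; the input path and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Illustrative only: the input path and column names are hypothetical.
spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()
df = spark.read.parquet("s3a://example-bucket/features/")

# Encode the string label, assemble features, and train a classifier.
indexer = StringIndexer(inputCol="label_str", outputCol="label")
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[indexer, assembler, lr])
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
model.transform(test).select("label", "prediction").show(5)
```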
Posted 1 week ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
- PySpark, Python, SQL: Strong focus on big data processing, which is core to data engineering.
- AWS Cloud Services (Lambda, Glue, S3, IAM): Indicates working with cloud-based data pipelines.
- Airflow, GitHub: Essential for orchestration and version control in data workflows.
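For illustration only, a minimal Airflow DAG sketch showing the kind of orchestration of PySpark jobs referred to above; the DAG id, schedule and spark-submit commands are hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative only: the DAG id, schedule and job paths are hypothetical.
with DAG(
    dag_id="daily_pyspark_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="spark-submit /opt/jobs/extract.py --run-date {{ ds }}",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /opt/jobs/transform.py --run-date {{ ds }}",
    )
    # Run the extract step before the transform step each day.
    extract >> transform
```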
Posted 1 week ago
6.0 - 9.0 years
15 - 18 Lacs
Pune
Work from Office
Responsibilities:
* Develop data strategies using machine learning & Python
* Analyze complex datasets with Power BI & PySpark
* Collaborate on cross-functional teams for insights delivery
Work from home / flexi working
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Description: Sr. Data Engineer – Big Data

The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated with a passion for problem solving and continuous learning.

Role and responsibilities:
• Strong technical, analytical, and problem-solving skills
• Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
• Data pipeline framework development

Technical skills requirements (the candidate must demonstrate proficiency in):
• CDH on-premise for data processing and extraction
• Ability to own and deliver on large, multi-faceted projects
• Fluency in complex SQL and experience with RDBMSs
• Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive and NoSQL DBs
• Experience designing and building big data pipelines
• Experience working on large-scale, distributed systems
• Strong hands-on experience with programming languages such as PySpark, Scala with Spark, and Python
• Certification in Hadoop/Big Data – Hortonworks/Cloudera
• Unix or shell scripting
• Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations
• Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment

Job Types: Full-time, Permanent
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Are you serving notice period at your current organization?
Education: Bachelor's (Required)
Experience: Python: 3 years (Required)
Work Location: In person
Posted 1 week ago
0 years
0 Lacs
Delhi, India
On-site
Role: PySpark developer
Skillset: Python + PySpark
Location: PAN India
Experience: 7+ years

Desired Competencies (Technical/Behavioral Competency)
Must-Have:
1. Strong hands-on experience with PySpark technology
2. Strong hands-on experience with Python
3. Strong knowledge of Python web frameworks
4. Good knowledge of SQL and AWS
5. Working in an onsite and offshore model

Good-to-Have:
1. Experience in PL/SQL and relational databases
2. Experience in AWS (Glue)
3. Exposure to creating Lambda functions, Step Functions, ECS clusters with Fargate, CloudFront, CloudTrail, API Gateway, Amazon Aurora
4. Experience using continuous integration tools like GitHub, SonarQube, Checkmarx
5. Experience working in agile (scrum-based model) and tools like Rally / Jira

Responsibility of / Expectations from the Role:
1. Responsible for coding, designing, deploying, and debugging development projects, typically on the server-side (or back-end)
2. Should take part in analysis, requirement gathering and design
3. Ability to understand the requirements from the functional/technical spec, and work with the customer architect/lead on architecture and solution design
4. Depth of knowledge on technical skills to suggest solution design options and best practices
5. Coordinate with the QA team and sort out issues during testing
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Location: Kolkata (Hybrid)
Experience Level: 5+ years
Mandatory Skills: Azure Databricks + SQL + PySpark

Primary Roles and Responsibilities:
- Developing Modern Data Warehouse solutions using Databricks and the Azure stack
- Ability to provide solutions that are forward-thinking in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix the issues
- Work with the business to understand reporting-layer needs and develop data models to fulfill reporting requirements
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate the data pipelines in a scheduler via Airflow

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience
- Must have 5+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of Data Management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Should have hands-on experience in SQL, Python and Spark (PySpark)
- Candidate must have experience in the Azure stack
- Desirable to have ETL with batch and streaming (Kinesis)
- Experience in building ETL / data warehouse transformation processes
- Experience with Apache Kafka for use with streaming data / event-based data
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging & geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Should have experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with a high attention to detail
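For illustration only, a minimal Databricks-style PySpark sketch of a Delta Lake upsert, reflecting the Delta Lake architecture mentioned above; the mount paths and key columns are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Illustrative only: paths and key columns are hypothetical; assumes a Databricks
# (or otherwise Delta-enabled) Spark session.
spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/raw/customers_incremental/")
target = DeltaTable.forPath(spark, "/mnt/curated/customers_delta/")

# Upsert incoming records into the curated Delta table on the business key.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```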
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Data Modeler
Experience: 6+ years
Location: Bangalore
Notice: Immediate joiners only

Job Description: We are seeking a Data Modeler to design, develop, and maintain conceptual, logical, and physical data models that support business needs. The ideal candidate will work closely with data engineers, architects, BAs and business stakeholders to ensure data consistency, integrity, and performance across various systems.

Key Responsibilities:
- Design and develop conceptual, logical, and physical data models based on business requirements.
- Collaborate with business analysts, data engineers, and architects to ensure data models align with business goals.
- Optimize database design to enhance performance, scalability, and maintainability.
- Define and enforce data governance standards, including naming conventions, metadata management, and data lineage.
- Work with ETL and BI teams to ensure seamless data integration and reporting capabilities.
- Analyze and document data relationships, dependencies, and transformations across various platforms.
- Maintain data dictionaries and ensure compliance with industry best practices.
- For the Azure data engineering stack, the data modeler needs to be hands-on with ADF, Azure Databricks, SCD, Unity Catalog, PySpark, PowerDesigner and Biz Designer.
Posted 1 week ago
30.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
For over 30 years, Beghou Consulting has been a trusted adviser to life science firms. We combine our strategic consulting services with proprietary technology to develop custom, data-driven solutions that allow life sciences companies to take their commercial operations to new heights. We are dedicated to client service and offer a full suite of consulting and technology services, all rooted in advanced analytics, to enhance commercial operations and boost sales performance.

Purpose of Job: You will be responsible and accountable for development and maintenance of data pipelines. You will work with the business team in the US to gather requirements and provide efficient solutions to business requests by using the in-house enterprise data platform.

We'll trust you to:
- Design, build, and maintain efficient, reusable, and reliable code
- Ensure the best performance and quality of applications
- Identify issues, and provide solutions to mitigate and address them
- Help maintain code quality, organization, and automation
- Continuously expand your body of knowledge via research
- Comply with corporate quality policies and procedures
- Ensure all training requirements are completed in a timely manner

You'll need to have:
- At least 3 years of Python programming experience, including data transformations using the Pandas/PySpark libraries.
- CDW (Commercial Data Warehouse) experience in the US pharmaceuticals market is strongly preferred.
- Experience implementing or supporting HCP and HCO data management, and MDM (Master Data Management), in the US pharmaceuticals market is preferred.
- Experience working with different Python libraries and willingness to learn/work on new libraries.
- Advanced analytical and problem-solving skills.
- Understanding of the business processes and business data used in data transformations.
- Knowledge of SQL queries, Snowflake, Databricks, Azure Blob Storage and AWS.

What you should know:
· We treat our employees with respect and appreciation, not only for what they do but who they are.
· We value the many talents and abilities of our employees and promote a supportive, collaborative, and dynamic work environment that encourages both professional and personal growth.
· You will have the opportunity to work with and learn from all levels in the organization, allowing everyone to work together to develop, achieve, and succeed with every project.
· We have had steady growth throughout our history because the people we hire are committed not only to delivering quality results for our clients but also to becoming leaders in sales and marketing analytics.
Posted 1 week ago
7.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS!! TCS is hiring for a Databricks architect.

Interview Mode: Virtual
Required Experience: 7-15 years
Work location: Chennai, Kolkata, Hyderabad

Must have: Hands-on experience in ADF, Azure Databricks, PySpark, Azure Data Factory, Unity Catalog, data migrations, data security
Good to have: Spark SQL, Spark Streaming, Kafka
Hands-on in Databricks on AWS, Apache Spark, AWS S3 (Data Lake), AWS Glue, AWS Redshift / Athena, AWS Data Catalog, Amazon Redshift, Amazon Athena, AWS RDS, AWS EMR (Spark/Hadoop), CI/CD (CodePipeline, CodeBuild)
Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch

If interested, kindly send your updated CV and the details below through DM/e-mail: srishti.g2@tcs.com
- Name:
- E-mail ID:
- Contact Number:
- Highest qualification (full-time):
- Preferred Location:
- Highest qualification university:
- Current organization:
- Total years of experience:
- Relevant years of experience:
- Any gap: mention no. of months/years (career/education):
- If any, then reason for gap:
- Is it re-begin:
- Previous organization name:
- Current CTC:
- Expected CTC:
- Notice Period:
- Have you worked with TCS before (Permanent / Contract):
- If shortlisted, will you be available for a virtual interview on 13-Jun-25 (Friday)?
Posted 1 week ago
30.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
For over 30 years, Beghou Consulting has been a trusted adviser to life science firms. We combine our strategic consulting services with proprietary technology to develop custom, data-driven solutions that allow life sciences companies to take their commercial operations to new heights. We are dedicated to client service and offer a full suite of consulting and technology services, all rooted in advanced analytics, to enhance commercial operations and boost sales performance.

Purpose of Job: You will be responsible for development and maintenance of data pipelines, which are used to ingest and transform data as per business rules. This role works with the business team in the US to gather requirements and provide efficient solutions to business requests by using the in-house enterprise data platform.

We'll trust you to:
- Design, build, and maintain efficient and reliable code.
- Ensure the best performance for data pipelines.
- Identify issues and provide solutions to mitigate and address them.
- Continuously expand your body of knowledge via research.
- Comply with corporate quality policies and procedures.
- Ensure all training requirements are completed in a timely manner.

You'll need to have a minimum of 2 years of experience in the following:
- CDW (Commercial Data Warehouse) experience in the US pharmaceuticals market is strongly preferred.
- Proficiency in Python programming, including use of the Pandas or PySpark libraries.
- Experience working with different Python libraries and willingness to learn/work on new libraries.
- Experience implementing or supporting HCP and HCO data management, and MDM (Master Data Management), in the US pharmaceuticals market is preferred.
- Advanced analytical and problem-solving skills.
- Good knowledge of SQL queries preferred.

What you should know:
· We treat our employees with respect and appreciation, not only for what they do but who they are.
· We value the many talents and abilities of our employees and promote a supportive, collaborative, and dynamic work environment that encourages both professional and personal growth.
· You will have the opportunity to work with and learn from all levels in the organization, allowing everyone to work together to develop, achieve, and succeed with every project.
· We have had steady growth throughout our history because the people we hire are committed not only to delivering quality results for our clients but also to becoming leaders in sales and marketing analytics.
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role: PySpark developer
Skillset: PySpark + Python
Location: PAN India
Experience: 7+ years

Desired Competencies (Technical/Behavioral Competency)
Must-Have:
1. Strong hands-on experience with PySpark technology
2. Strong hands-on experience with Python
3. Strong knowledge of Python web frameworks
4. Good knowledge of SQL and AWS
5. Working in an onsite and offshore model

Good-to-Have:
1. Experience in PL/SQL and relational databases
2. Experience in AWS (Glue)
3. Exposure to creating Lambda functions, Step Functions, ECS clusters with Fargate, CloudFront, CloudTrail, API Gateway, Amazon Aurora
4. Experience using continuous integration tools like GitHub, SonarQube, Checkmarx
5. Experience working in agile (scrum-based model) and tools like Rally / Jira

Responsibility of / Expectations from the Role:
1. Responsible for coding, designing, deploying, and debugging development projects, typically on the server-side (or back-end)
2. Should take part in analysis, requirement gathering and design
3. Ability to understand the requirements from the functional/technical spec, and work with the customer architect/lead on architecture and solution design
4. Depth of knowledge on technical skills to suggest solution design options and best practices
5. Coordinate with the QA team and sort out issues during testing
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary:
- Responsible for building and maintaining high-performance data systems that enable deeper insights for all parts of our organization
- Responsible for developing ETL/ELT pipelines for both batch and streaming data
- Responsible for data flow for real-time and analytics use cases
- Improving data pipeline performance by implementing the industry's best practices and different techniques for parallel data processing
- Responsible for the documentation, design, development and testing of Hadoop reporting and analytical applications
- Responsible for technical discussion and finalization of requirements by communicating effectively with stakeholders
- Responsible for converting functional requirements into detailed technical design
- Responsible for adhering to SCRUM timelines and delivering accordingly
- Responsible for preparing Unit/SIT/UAT test cases and logging the results
- Responsible for planning and tracking the implementation to closure
- Ability to drive enterprise-wide initiatives for usage of external data
- Envision an enterprise-wide Entitlements platform and align it with the Bank's NextGen technology vision
- Continually look for process improvements
- Coordinate between various technical teams and systems for smooth project execution, starting from technical requirements discussion, overall architecture design, technical solution discussions, build, unit testing, regression testing, system integration testing, user acceptance testing, go live, user verification testing and rollback (if required)
- Prepare a technical plan with clear milestone dates for technical tasks, which will be input to the PM's overall project plan
- Coordinate on a need basis with technical teams across technology who are not directly involved in the project, for example firewall network teams, DataPower teams, EDMP, OAM, OIM, ITSC, GIS teams etc.
- Responsible for supporting the change management process
- Responsible for working alongside PSS teams and ensuring proper KT sessions are provided to the support teams
- Ensure any risks within the project are identified and recorded in Risk Wise after discussion with the business and manager
- Ensure project delivery is seamless with zero to negligible defects

Key Responsibilities:
- Hands-on experience with C++, .Net, SQL, jQuery, Web APIs & services, Postgres SQL & MS SQL Server, Azure DevOps & related tools, GitHub, ADO CI/CD pipelines
- Should be transversal enough to handle Linux, PowerShell, Unix shell scripting, Kafka and Spark streaming
- Hadoop – Hive, Spark, Python, PySpark
- Hands-on experience with workflow schedulers like NiFi/Ctrl-M
- Experience with data loading tools like Sqoop
- Experience and understanding of object-oriented programming
- Motivation to learn innovative trades of programming, debugging, and deploying
- Self-starter, with excellent self-study skills and growth aspirations, capable of working without direction and able to deliver technical projects from scratch
- Excellent written and verbal communication skills
- Flexible attitude, performs under pressure
- Ability to lead and influence the direction and strategy of the technology organization
- Test-driven development, commitment to quality and a thorough approach to work
- A good team player with the ability to meet tight deadlines in a fast-paced environment
- Guide junior developers and share best practices
- A cloud certification will be an added advantage: any one of Azure/AWS/GCP
- Must have knowledge and understanding of Agile principles
- Must have a good understanding of the project life cycle
- Must have sound problem analysis and resolution abilities
- Good understanding of external and internal data management and the implications of cloud usage in the context of external data

Strategy: Develop the strategic direction and roadmap for CRES TTO, aligning with Business Strategy, ITO Strategy and investment priorities.

Business: Work hand in hand with Product Owners, Business Stakeholders, Squad Leads and CRES TTO partners, taking product programs from investment decisions into design, specifications, solutioning, development, implementation and hand-over to operations, securing support and collaboration from other SCB teams. Ensure delivery to business meets time, cost and high quality constraints. Support respective businesses in growing return on investment, commercialisation of capabilities, bid teams, monitoring of usage, improving client experience, enhancing operations and addressing defects & continuous improvement of systems. Foster an ecosystem of innovation and enable business through technology.

Governance: Promote an environment of compliance with internal control functions and the external regulatory framework.

People & Talent: Ability to work with other developers and assist junior team members. Identify training needs and take action to ensure company-wide compliance. Pursue continuing education on new solutions, technology, and skills. Problem solving with other team members in the project.

Risk Management: Interpreting briefs to create high-quality coding that functions according to specifications.

Key stakeholders: CRES Domain Clients; Functions MT members, Operations and COO; ITO engineering, build and run teams; Architecture and Technology Support teams; Supply Chain Management, Risk, Legal, Compliance and Audit teams; external vendors.

Regulatory & Business Conduct: Display exemplary conduct and live by the Group's Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. Serve as a Director of the Board. Exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Other Responsibilities: Embed Here for Good and the Group's brand and values in the team. Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures. Multiple functions (double hats).

Skills and Experience:
- Technical project delivery (Agile & classic)
- Vendor management
- Stakeholder management

Qualifications:
- 5+ years in a lead development role
- Should have managed a team of minimum 5 members
- Should have delivered multiple projects end to end
- Experience in property technology products (e.g. Lenel, CBRE, Milestone etc.)
- Strong analytical, numerical and problem-solving skills
- Should be able to understand and communicate the technical details of the project
- Good communication skills, oral and written
- Very good exposure to technical projects, e.g. server maintenance, system administration, or development or implementation experience
- Effective interpersonal and relational skills, able to coach and develop the team to deliver their best
- Certified Scrum Master

About Standard Chartered: We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together; we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer: In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which are combined to 30 days minimum.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.
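For illustration only, a minimal PySpark Structured Streaming sketch of the Kafka and Spark streaming work described above; the broker addresses, topic name and paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: broker, topic, and paths are hypothetical.
spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "card-transactions")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string before parsing downstream.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/streams/card_transactions/")
    .option("checkpointLocation", "/data/checkpoints/card_transactions/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```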
Posted 1 week ago
170.0 years
0 Lacs
Greater Chennai Area
On-site
Area(s) of responsibility

Empowered By Innovation
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

Role: Lead Data Engineer - AWS
Location: Bangalore / Chennai
Experience: 5 – 7 years

Job Profile:
- Provide estimates for requirements, analyse and develop as per the requirement.
- Develop and maintain data pipelines and ETL (Extract, Transform, Load) processes to extract data efficiently and reliably from various sources, transform it into a usable format, and load it into the appropriate data repositories.
- Create and maintain logical and physical data models that align with the organization's data architecture and business needs. This includes defining data schemas, tables, relationships, and indexing strategies for optimal data retrieval and analysis.
- Collaborate with cross-functional teams and stakeholders to ensure data security, privacy, and compliance with regulations.
- Collaborate with downstream applications to understand their needs, and build and optimize the data storage accordingly.
- Work closely with other stakeholders and the business to understand data requirements and translate them into technical solutions.
- Familiar with Agile methodologies, with prior experience working with Agile teams using Scrum/Kanban.
- Lead technical discussions with customers to find the best possible solutions.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Optimize data pipelines to improve performance and cost, while ensuring a high quality of data within the data lake.
- Monitor services and jobs for cost and performance, ensuring continual operation of data pipelines and fixing of defects.
- Constantly look for opportunities to optimize data pipelines to improve performance.

Must Have:
- Hands-on expertise of 4-5 years in AWS services like S3, Lambda, Glue, Athena, RDS, Step Functions, SNS, SQS, API Gateway, security, access and role permissions, and logging and monitoring services.
- Good hands-on knowledge of Python, Spark, Hive, Unix and the AWS CLI.
- Prior experience working with a streaming solution like Kafka.
- Prior experience implementing different file storage types like Delta Lake / Iceberg.
- Excellent knowledge of data modeling and designing ETL pipelines.
- Strong knowledge of using different databases such as MySQL and Oracle, and writing complex queries.
- Strong experience working in a continuous integration and deployment process.
- PySpark, AWS, SQL, Kafka

Nice To Have:
- Hands-on experience with Terraform, Git, Git Actions, CI/CD pipelines, Amazon Q, and AI.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: Join our customer's team as a Software Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, onsite environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.

Key Responsibilities:
• Develop, test, and maintain scalable backend components and microservices using Python and PySpark.
• Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms.
• Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance.
• Integrate machine learning models into production-grade backend systems powering innovative AI features.
• Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions.
• Monitor, troubleshoot, and enhance system performance using Redis for caching and scalability.
• Create clear technical documentation and communicate proactively with the team, emphasizing both written and verbal skills.

Required Skills and Qualifications:
• Proficient in Python for backend development with strong coding standards.
• Practical experience with Databricks and PySpark in live production environments.
• Advanced knowledge of MySQL database design, query optimization, and maintenance.
• Solid foundation in machine learning concepts and deploying ML models in backend systems.
• Experience utilizing Redis for effective caching and state management.
• Outstanding written and verbal communication abilities with strong attention to detail.
• Demonstrated success working collaboratively in a fast-paced onsite setting in Hyderabad.

Preferred Qualifications:
• Background in high-growth AI/ML or complex data engineering projects.
• Familiarity with additional backend technologies or cloud-based platforms.
• Experience mentoring or leading technical teams.

Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.
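For illustration only, a minimal Python sketch of the Redis cache-aside pattern alongside MySQL mentioned in the responsibilities; the connection details, table and key names are hypothetical.

```python
import json
import redis
import mysql.connector

# Illustrative only: connection details, table and key naming are hypothetical.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
db = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="appdb"
)

def get_user(user_id: int) -> dict:
    """Cache-aside read: try Redis first, fall back to MySQL, then populate the cache."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    cur = db.cursor(dictionary=True)
    cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
    row = cur.fetchone() or {}
    cur.close()

    cache.set(key, json.dumps(row), ex=300)  # expire after 5 minutes
    return row
```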
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Cloud and AWS Expertise:
- In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena.
- Strong understanding of cloud architecture and best practices for high availability and fault tolerance.

Data Engineering Concepts:
- Expertise in ETL/ELT processes, data modeling, and data warehousing.
- Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark.
- Proficiency in handling structured and unstructured data.

Programming and Scripting:
- Proficiency in Python, PySpark and SQL for data manipulation and pipeline development.
- Expertise in working with data warehousing solutions like Redshift.
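For illustration only, a minimal boto3 sketch of running a serverless Athena query of the kind listed under the AWS expertise above; the database, query and S3 output location are hypothetical.

```python
import time
import boto3

# Illustrative only: database, query, and output location are hypothetical.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM clickstream GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[:5]:
        print([col.get("VarCharValue") for col in row["Data"]])
```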
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Azure Data Engineer
Location: Gurugram

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

🧠 What You'll Do
🔹 Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
🔹 Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
🔹 Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

🎯 Tech Stack
☁️ Azure | 🧱 Databricks | 🐍 PySpark | 📊 SQL

👤 What We're Looking For
✅ 3+ years of experience in data engineering or analytics engineering
✅ Hands-on with cloud data platforms and large-scale data processing
✅ Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks etc. Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design and dimensional data modelling.
- Solid knowledge of data warehouse best practices, development standards and methodologies.
- Experience with ETL/ELT tools like ADF, Informatica, Talend etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery etc.
- Strong experience with big data tools (Databricks, Spark etc.) and programming skills in PySpark and Spark SQL.
- Be an independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge.
- SAP ECC / S/4 and HANA knowledge.
- Intermediate knowledge of Power BI.
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.

Best Regards,
Santosh Cherukuri
Email: scherukuri@bayonesolutions.com
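For illustration only, a minimal PySpark / Spark SQL sketch of a dimensional (fact/dimension) query of the kind a warehouse layer like this serves; the storage paths, table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Illustrative only: paths, table and column names are hypothetical.
spark = SparkSession.builder.appName("dimensional-sketch").getOrCreate()

spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/fact_sales/") \
     .createOrReplaceTempView("fact_sales")
spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/dim_product/") \
     .createOrReplaceTempView("dim_product")

# Join the fact table to a dimension and aggregate revenue by category.
report = spark.sql("""
    SELECT p.category,
           SUM(f.quantity * f.unit_price) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    WHERE f.sale_date >= '2024-01-01'
    GROUP BY p.category
    ORDER BY revenue DESC
""")
report.show()
```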
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Principal Engineer - PySpark
- This is a challenging role that will see you design and engineer software with the customer or user experience as the primary objective
- You'll actively contribute to our architecture, design and engineering centre of excellence, collaborating to improve the bank's overall software engineering capability
- You'll gain valuable stakeholder exposure as you build and leverage relationships, as well as the opportunity to hone your technical talents
- We're offering this role at vice president level

What you'll do: As a Principal Engineer, you'll be creating great customer outcomes via engineering and innovative solutions to existing and new challenges, and technology designs which are innovative, customer centric, high performance, secure and robust. You'll be working with software engineers in the production and prototyping of innovative ideas, engaging with domain and enterprise architects to validate and leverage these in wider contexts, by incorporating the relevant architectures. We'll also look to you to design and develop software with a focus on the automation of build, test and deployment activities, while developing the discipline of software engineering across the business.

You'll also be:
- Defining, creating and providing oversight and governance of engineering and design solutions with a focus on end-to-end automation, simplification, resilience, security, performance, scalability and reusability
- Working within a platform or feature team along with software engineers to design and engineer complex software, scripts and tools to enable the delivery of bank platforms, applications and services, acting as a point of contact for solution design considerations
- Defining and developing architecture models and roadmaps of application and software components to meet business and technical requirements, driving common usability across products and domains
- Designing, producing, testing and implementing working code, along with applying Agile methods to the development of software with the use of DevOps techniques

The skills you'll need: You'll come with significant experience in software engineering, software or database design and architecture, as well as experience of developing software within a DevOps and Agile framework. Along with an expert understanding of the latest market trends, technologies and tools, you'll need at least ten years of experience working with Python or PySpark, with at least four years of team handling experience. You'll need experience in model development and support, with expertise in Spark SQL query optimization and performance tuning. You'll also need experience in writing advanced Spark SQL or ANSI SQL queries. Knowledge of AWS will be highly desired.

You'll also need:
- A strong background in leading software development teams in a matrix structure, introducing and executing technical strategies
- Experience in Unix or Linux scripting, Airflow, continuous integration, DevOps, Git and Artifactory
- Experience in Agile, a test-driven development approach and software delivery best practice
- The ability to rapidly and effectively understand and translate product and business requirements into technical solutions
- A background of working with code repositories, bug tracking tools and wikis
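For illustration only, a minimal PySpark sketch of the Spark SQL performance work this role describes: a broadcast join hint plus inspecting the physical plan; the view and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Illustrative only: paths, view and column names are hypothetical.
spark = SparkSession.builder.appName("tuning-review-sketch").getOrCreate()

spark.read.parquet("/data/events/").createOrReplaceTempView("events")
spark.read.parquet("/data/users/").createOrReplaceTempView("users")

# Hint Spark to broadcast the small users table into the join.
query = spark.sql("""
    SELECT /*+ BROADCAST(u) */ u.country, COUNT(*) AS event_count
    FROM events e
    JOIN users u ON e.user_id = u.user_id
    GROUP BY u.country
""")

# Inspect the physical plan to confirm a broadcast hash join was chosen
# before running the query at scale.
query.explain(mode="formatted")
query.show()
```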
Posted 1 week ago