0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior Automation QA at Barclays, where you will be responsible for supporting the successful delivery of location strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. To be successful as a Senior Automation QA, you should have experience with Spark SQL, Python/PySpark scripting, and ETL concepts. Some other highly valued skills may include AWS exposure and Jupyter Notebook. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Pune.

Purpose of the role: To design, develop, and execute testing strategies to validate functionality, performance, and user experience, while collaborating with cross-functional teams to identify and resolve defects, and continuously improve testing processes and methodologies, to ensure software quality and reliability.

Accountabilities: Development and implementation of comprehensive test plans and strategies to validate software functionality and ensure compliance with established quality standards. Creation and execution of automated test scripts, leveraging testing frameworks and tools to facilitate early detection of defects and quality issues. Collaboration with cross-functional teams to analyse requirements, participate in design discussions, and contribute to the development of acceptance criteria, ensuring a thorough understanding of the software being tested. Root cause analysis for identified defects, working closely with developers to provide detailed information and support defect resolution. Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing. Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.

Assistant Vice President Expectations: To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes.
Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external, such as procedures and practices (in other areas, teams, companies, etc.), to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as an ETL Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable and secure infrastructure, ensuring seamless delivery of our digital solutions. To be successful as an ETL Data Engineer, you should have experience with Ab Initio, Unix Shell Scripting, Oracle, and PySpark. Some other highly valued skills may include Python, Teradata, Java, and Machine Learning. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. The role is based out of Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities: Build and maintenance of data architecture pipelines that enable the transfer and processing of durable, complete and consistent data. Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Development of processing and analysis algorithms fit for the intended data complexity and volumes. Collaboration with data scientists to build and deploy machine learning models.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in the assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they develop technical expertise in the work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Take responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail.
Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
bhubaneswar
On-site
As a Pyspark Developer_VIS, your primary responsibility will be to develop high-performance Pyspark applications for large-scale data processing. You will collaborate with data engineers and analysts to integrate data pipelines and design ETL processes using Pyspark. Optimizing existing data models and workflows to enhance overall performance is also a key aspect of your role. Additionally, you will need to analyze large datasets to derive actionable insights and ensure data quality and integrity throughout the data processing lifecycle. Utilizing SQL for querying databases and validating data is essential, along with working with cloud technologies to deploy and maintain data solutions. You will participate in code reviews, maintain version control, and document all processes, workflows, and system changes clearly. Providing support in resolving production issues and assisting stakeholders, as well as mentoring junior developers on best practices in data processing, are also part of your responsibilities. Staying updated on emerging technologies and industry trends, implementing data security measures, contributing to team meetings, and offering insights for project improvements are other expectations from this role. Qualifications required for this position include a Bachelor's degree in Computer Science, Engineering, or a related field, along with 3+ years of experience in Pyspark development and data engineering. Strong proficiency in SQL and relational databases, experience with ETL tools and data processing frameworks, familiarity with Python for data manipulation and analysis, and knowledge of big data technologies such as Apache Hadoop and Spark are necessary. Experience working with cloud platforms like AWS or Azure, understanding data warehousing concepts and strategies, excellent problem-solving and analytical skills, attention to detail, commitment to quality, ability to work independently and as part of a team, excellent communication and interpersonal skills, experience with version control systems like Git, managing multiple priorities in a fast-paced environment, willingness to learn and adapt to new technologies, strong organizational skills, and meeting deadlines are also essential for this role. In summary, the ideal candidate for the Pyspark Developer_VIS position should possess a diverse skill set including cloud technologies, big data, version control, data warehousing, Pyspark, ETL, Python, Azure, Apache Hadoop, data analysis, Apache Spark, SQL, AWS, and more. ,
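Purely to illustrate the kind of PySpark ETL and SQL validation work this role describes, a minimal sketch might look like the following (the paths, column names, and quality rule are assumptions for illustration, not details from the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw orders from a landing area (path is illustrative)
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: cast types, derive revenue, drop obviously invalid rows
clean = (
    orders
    .withColumn("quantity", F.col("quantity").cast("int"))
    .withColumn("unit_price", F.col("unit_price").cast("double"))
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .filter(F.col("quantity") > 0)
)

# Validate with SQL: fail fast if any derived revenue is missing
clean.createOrReplaceTempView("orders_clean")
bad_rows = spark.sql("SELECT COUNT(*) AS n FROM orders_clean WHERE revenue IS NULL").first()["n"]
if bad_rows:
    raise ValueError(f"{bad_rows} rows failed the revenue check")

# Load: write partitioned Parquet for downstream analysis
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-bucket/curated/orders/")
```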
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As a Microsoft Azure Data Engineer based in the APAC region, your primary role will involve designing, developing, and maintaining data solutions for APAC Advisory products. Your responsibilities will include ensuring data quality, security, scalability, and performance of data products. Collaborating with product managers, developers, analysts, and stakeholders is essential to understand business requirements and translate them into data models and architectures. You will have the opportunity to work closely with Data Architects, Engineers, and Data Scientists within the organization to support the development and maintenance of various data products. Your expertise will contribute significantly to enhancing product effectiveness and user experiences. Leveraging the organization's data assets for decision-making, analytics, and operational efficiency will be a key focus of your role. Your main responsibilities will revolve around designing, implementing, and managing data pipelines and architectures on the Azure Synapse platform. Utilizing tools such as PySpark, Synapse pipelines, and API integrations, you will be instrumental in developing robust data solutions that align with our business needs. Implementing the Medallion Architecture framework, including Bronze, Silver, and Gold layers, will be crucial for efficient data processing and storage. Key Responsibilities: - Design and implement data pipelines using PySpark Notebooks. - Manage and optimize data storage and processing with the Medallion Architecture framework. - Develop and maintain ETL processes for data ingestion, transformation, and loading. - Integrate APIs to facilitate data exchange between systems. - Create and manage table views for data visualization. - Ensure data quality, consistency, and security across all layers. - Collaborate with stakeholders to understand data requirements and deliver solutions. - Monitor and troubleshoot data pipelines to ensure reliability and performance. - Stay updated with the latest Azure technologies and best practices. Qualifications: - Bachelor's degree in computer science, Information Technology, or a related field. - Proven experience as a Data Engineer focusing on Azure technologies. - Proficiency in PySpark notebooks and Synapse pipelines with a minimum of 2 years of demonstrable experience. - Experience with Medallion Architecture and data management through different layers. - Familiarity with API integration and master data management tools like Profisee. - Ability to create and manage table views for data visualization. - Strong problem-solving skills, attention to detail, and communication skills. - Ability to thrive in a fast-paced, dynamic environment. Preferred Qualifications: - Azure certifications, such as Azure Data Engineer Associate. - Experience with other Azure services like Azure Databricks, Azure Synapse Analytics, and Azure SQL Database. - Knowledge of data governance and security best practices. - Familiarity with additional master data management tools and techniques. INCO: Cushman & Wakefield,
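For readers unfamiliar with the Medallion Architecture referenced in this posting, a highly simplified PySpark sketch of the Bronze → Silver → Gold flow is shown below; the storage paths, schema, and aggregation are hypothetical and only indicate the layering idea:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()
lake = "abfss://datalake@exampleaccount.dfs.core.windows.net"  # illustrative ADLS path

# Bronze: land the raw extract as-is, plus an ingestion timestamp
raw = spark.read.json(f"{lake}/landing/bookings/")
(
    raw.withColumn("_ingested_at", F.current_timestamp())
       .write.mode("append").parquet(f"{lake}/bronze/bookings/")
)

# Silver: cleanse and conform - deduplicate and standardise types
bronze = spark.read.parquet(f"{lake}/bronze/bookings/")
silver = (
    bronze.dropDuplicates(["booking_id"])
          .withColumn("booking_date", F.to_date("booking_date"))
          .filter(F.col("booking_id").isNotNull())
)
silver.write.mode("overwrite").parquet(f"{lake}/silver/bookings/")

# Gold: business-level aggregate ready for table views and dashboards
gold = silver.groupBy("property_id", "booking_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.mode("overwrite").parquet(f"{lake}/gold/daily_revenue/")
```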
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Back-End Developer at our company, you will be responsible for developing an AI-driven prescriptive remediation model for SuperZoom, CBRE's data quality platform. Your primary focus will be on analyzing invalid records flagged by data quality rules and providing suggestions for corrected values based on historical patterns. It is crucial that the model you develop learns from past corrections to continuously enhance its future recommendations. The ideal candidate for this role should possess a solid background in machine learning, natural language processing (NLP), data quality, and backend development. Your key responsibilities will include developing a prescriptive remediation model to analyze and suggest corrections for bad records, implementing a feedback loop for continuous learning, building APIs and backend workflows for seamless integration, designing a data pipeline for real-time processing of flagged records, optimizing model performance for large-scale datasets, and collaborating effectively with data governance teams, data scientists, and front-end developers. Additionally, you will be expected to ensure the security, scalability, and performance of the system in handling sensitive data. To excel in this role, you should have at least 5 years of backend development experience with a focus on AI/ML-driven solutions. Proficiency in Python, including skills in Pandas, PySpark, and NumPy, is essential. Experience with machine learning libraries like Scikit-Learn, TensorFlow, or Hugging Face Transformers, along with a solid understanding of data quality, fuzzy matching, and NLP techniques for text correction, will be advantageous. Strong SQL skills and familiarity with databases such as PostgreSQL, Snowflake, or MS SQL Server are required, as well as expertise in building RESTful APIs and integrating ML models into production systems. Your problem-solving and analytical abilities will also be put to the test in handling diverse data quality issues effectively. Nice-to-have skills for this role include experience with vector databases (e.g., Pinecone, Weaviate) for similarity search, familiarity with LLMs and fine-tuning for data correction tasks, experience with Apache Airflow for workflow automation, and knowledge of reinforcement learning to enhance remediation accuracy over time. Your success in this role will be measured by the accuracy and relevance of suggestions provided for data quality issues in flagged records, improved model performance through iterative learning, seamless integration of the remediation model into SuperZoom, and on-time delivery of backend features in collaboration with the data governance team.,
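The posting does not prescribe an implementation, but as a hedged sketch of how flagged values could be matched against historically accepted corrections with simple fuzzy matching (standard library only; the field names and data are invented):

```python
import difflib
import pandas as pd

# Historical corrections: previously-bad values and their accepted fixes (illustrative data)
history = pd.DataFrame({
    "bad_value": ["Hydrabad", "Bengalru", "Nwe York"],
    "corrected_value": ["Hyderabad", "Bengaluru", "New York"],
})
known_good = history["corrected_value"].unique().tolist()

def suggest_correction(value: str, cutoff: float = 0.8):
    """Return the closest historically accepted value, or None if nothing is similar enough."""
    matches = difflib.get_close_matches(value, known_good, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Records flagged by a data quality rule (illustrative)
flagged = pd.DataFrame({"record_id": [101, 102], "city": ["Hyderbad", "Bengalur"]})
flagged["suggested_city"] = flagged["city"].apply(suggest_correction)
print(flagged)
```

In practice, a production model of this kind would also learn from a feedback loop of accepted and rejected suggestions, as the responsibilities above describe.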
Posted 3 days ago
6.0 - 14.0 years
0 Lacs
kolkata, west bengal
On-site
You have the opportunity to join our team as a Data Engineer with expertise in PySpark. You will be based in Kolkata, working in a hybrid model with 3 days in the office. With a minimum of 6 to 14 years of experience, you will play a crucial role in building and deploying Bigdata applications using PySpark. Your responsibilities will include having a minimum of 6 years of experience in building and deploying Bigdata applications using PySpark. You should also have at least 2 years of experience with AWS Cloud, focusing on data integration with Spark and AWS Glue/EMR. A deep understanding of Spark architecture and distributed systems is essential, along with good exposure to Spark job optimizations. Your expertise in handling complex large-scale Big Data environments will be key in this role. You will be expected to design, develop, test, deploy, maintain, and enhance data integration pipelines. Mandatory skills for this role include over 4 years of experience in PySpark, as well as 2+ years of experience in AWS Glue/EMR. A strong grasp of SQL is necessary, along with excellent written and verbal communication skills, and effective time management. Nice-to-have skills include any cloud skills and ETL knowledge. This role offers an exciting opportunity for a skilled Data Engineer to contribute to cutting-edge projects and make a significant impact within our team.,
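As a small, hedged illustration of the Spark job optimisation mentioned above (dataset names, sizes, and paths are assumed), broadcasting the small side of a join and writing sensibly partitioned output are typical first tuning steps on Glue/EMR:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

# A large fact table and a small lookup table (paths are illustrative)
transactions = spark.read.parquet("s3://example-bucket/silver/transactions/")
merchants = spark.read.parquet("s3://example-bucket/silver/merchants/")  # small dimension

# Broadcasting the small side avoids a shuffle-heavy sort-merge join
enriched = transactions.join(broadcast(merchants), on="merchant_id", how="left")

# Reduce small-file problems and keep partitions query-friendly before writing
daily = enriched.withColumn("txn_date", F.to_date("txn_ts"))
(
    daily.repartition("txn_date")          # one task group per output partition
         .write.mode("overwrite")
         .partitionBy("txn_date")
         .parquet("s3://example-bucket/gold/transactions_enriched/")
)
```

Broadcasting only makes sense when the lookup side comfortably fits in executor memory; otherwise the default sort-merge join is usually the safer choice.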
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
The purpose of this role is to provide solutions and bridge the gap between technology and business know-how to deliver any client solution. You will be responsible for bridging the gap between project and support teams through techno-functional expertise. For new business implementation projects, you will drive the end-to-end process from business requirement management to integration & configuration and production deployment. It will be your responsibility to check the feasibility of the new change requirements and provide optimal solutions to the client with clear timelines. You will provide techno-functional solution support for all new business implementations while building the entire system from scratch. Your role will also involve supporting the solutioning team from architectural design, coding, testing, and implementation. In this role, you must understand the functional design as well as technical design and architecture to be implemented on the ERP system. You will customize, extend, modify, localize, or integrate the existing product through coding, testing, and production. Implementing the business processes, requirements, and the underlying ERP technology to translate them into ERP solutions will also be part of your responsibilities. Writing code as per the developmental standards to decide upon the implementation methodology will be crucial. Providing product support and maintenance to clients for a specific ERP solution and resolving day-to-day queries/technical problems that may arise are also key aspects of this role. Additionally, you will be required to create and deploy automation tools/solutions to ensure process optimization and increase efficiency. Your role will involve bridging technical and functional requirements of the project and providing solutioning/advice to the client or internal teams accordingly. Supporting on-site managers with necessary details regarding any change and providing off-site support will also be expected. Skill upgradation and competency building are essential in this role, including clearing Wipro exams and internal certifications from time to time to upgrade skills. Attending trainings and seminars to enhance knowledge in functional/technical domains and writing papers, articles, case studies, and publishing them on the intranet are also part of the responsibilities. Stakeholder Interaction involves interacting with internal stakeholders such as Lead Consultants and Onsite Project Manager/Project Teams for reporting, updates, and off-site support as per client requirements. External stakeholder interaction includes clients for solutioning and support. Competencies required for this role include Systems Thinking, Leveraging Technology, and Functional/Technical Knowledge at varying competency levels ranging from Foundation to Master. Additionally, behavioral competencies like Formulation & Prioritization, Innovation, Managing Complexity, Client Centricity, Execution Excellence, and Passion for Results are crucial for success in this role. In terms of performance parameters, your contribution to customer projects will be measured based on quality, SLA, ETA, number of tickets resolved, problems solved, number of change requests implemented, zero customer escalation, and CSAT. Automation will be evaluated based on process optimization, reduction in process/steps, and reduction in the number of tickets raised. 
Skill upgradation will be measured by the number of trainings & certifications completed and the number of papers/articles written in a quarter.,
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Full Stack Developer, you will be responsible for developing and maintaining both front-end and back-end components of web applications. You will utilize the .NET framework and related technologies for server-side development while leveraging React.js to build interactive and responsive user interfaces on the client-side. Your role will involve building and maintaining RESTful APIs to facilitate communication between front-end and back-end systems, as well as implementing authentication, authorization, and data validation mechanisms within APIs. In terms of Database Management, you will design, implement, and manage databases using technologies such as SQL Server or Azure SQL Database. Your responsibilities will include ensuring efficient data storage, retrieval, and manipulation to support application functionality. You will also be involved in Data Pipeline Management, where you will design, implement, and manage data pipelines using technologies such as PySpark, Python, and SQL. Building and maintaining pipelines in Databricks will be part of your tasks. Cloud Services Integration will be a key aspect of your role, requiring you to utilize Azure services for hosting, scaling, and managing web applications. You will implement cloud-based solutions for storage, caching, and data processing, as well as configure and manage Azure resources such as virtual machines, databases, and application services. In terms of DevOps and Deployment, you will implement CI/CD pipelines for automated build, test, and deployment processes using Jenkins. It will be essential to ensure robust monitoring, logging, and error handling mechanisms are in place. Documentation and Collaboration are important aspects of this role, where you will document technical designs, implementation details, and operational procedures. Collaborating with product managers, designers, and other stakeholders to understand requirements and deliver high-quality solutions will be part of your responsibilities. Continuous Learning is encouraged in this role, requiring you to stay updated with the latest technologies, tools, and best practices in web development and cloud computing. You will continuously improve your skills and knowledge through self-learning, training, and participation in technical communities. Requirements for this role include a Bachelor's Degree or equivalent experience, along with 5+ years of software engineering experience in reliable and resilient Microservice development and deployment. Strong knowledge of RESTful API, React.js, Azure, Python, PySpark, Databricks, Typescript, Node.js, relational databases like SQL Server, and No-SQL data store such as Redis and ADLS is essential. Experience with Data Engineering, Jenkins, Artifactory, and Automation testing frameworks is desirable. Prior experience with Agile, CI/CD, Docker, Kubernetes, Kafka, Terraform, or similar technologies is also beneficial. A passion for learning and disseminating new knowledge is highly valued in this role.,
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a PySpark Developer with expertise in AWS and SQL, your main responsibility will be to create and enhance data pipelines within a cloud setting. Your key duties will include developing ETL workflows using PySpark, constructing and overseeing data pipelines on AWS (including S3, Glue, EMR, Lambda), crafting and fine-tuning SQL queries for data manipulation and reporting, as well as ensuring data quality, performance, and dependability. You will also be expected to collaborate closely with data engineers, analysts, and architects. To excel in this role, you must possess a high level of proficiency in PySpark and SQL, along with hands-on experience with AWS cloud services. Strong problem-solving and debugging skills are crucial, and familiarity with data lake and data warehouse concepts will be an added advantage.,
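The AWS services listed here are usually stitched together with small amounts of glue code; for instance, a Lambda handler that starts a Glue ETL job when a new file lands in S3 might look roughly like this (the job and bucket names are placeholders, not details from the posting):

```python
import json
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by an S3 put event; starts a (hypothetical) Glue ETL job for the new object."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    response = glue.start_job_run(
        JobName="example-etl-job",                       # placeholder Glue job name
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"jobRunId": response["JobRunId"]}),
    }
```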
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for an experienced AI/ML Architect to spearhead the design, development, and deployment of cutting-edge AI and machine learning systems. As the ideal candidate, you should possess a strong technical background in Python and data science libraries, profound expertise in AI and ML algorithms, and hands-on experience in crafting scalable AI solutions. This role demands a blend of technical acumen, leadership skills, and innovative thinking to enhance our AI capabilities. Your responsibilities will include identifying, cleaning, and summarizing complex datasets from various sources, developing Python/PySpark scripts for data processing and transformation, and applying advanced machine learning techniques like Bayesian methods and deep learning algorithms. You will design and fine-tune machine learning models, build efficient data pipelines, and leverage distributed databases and frameworks for large-scale data processing. In addition, you will lead the design and architecture of AI systems, with a focus on Retrieval-Augmented Generation (RAG) techniques and large language models. Your qualifications should encompass 5-7 years of total experience with 2-3 years in AI/ML, proficiency in Python and data science libraries, hands-on experience with PySpark scripting and AWS services, strong knowledge of Bayesian methods and time series forecasting, and expertise in machine learning algorithms and deep learning frameworks. You should also have experience in structured, unstructured, and semi-structured data, advanced knowledge of distributed databases, and familiarity with RAG systems and large language models for AI outputs. Strong collaboration, leadership, and mentorship skills are essential. Preferred qualifications include experience with Spark MLlib, SciPy, StatsModels, SAS, and R, a proven track record in developing RAG systems, and the ability to innovate and apply the latest AI techniques to real-world business challenges. Join our team at TechAhead, a global digital transformation company known for AI-first product design thinking and bespoke development solutions. With over 14 years of experience and partnerships with Fortune 500 companies, we are committed to driving digital innovation and delivering excellence. At TechAhead, you will be part of a dynamic team that values continuous learning, growth, and crafting tailored solutions for our clients. Together, let's shape the future of digital innovation worldwide!,
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Principal Analyst at Citi's Analytics and Information Management (AIM) team in Bangalore, India, you will play a crucial role in creating client-centric analytical solutions for various business challenges. With a focus on client obsession and stakeholder management, you will be responsible for owning and delivering complex analytical projects. Your expertise in business context understanding, data analysis, and project management will be essential in identifying trends, patterns, and presenting high-quality solutions to senior management. Your primary responsibilities will include developing business critical dashboards, assessing and optimizing marketing programs, sizing the impact of strategic changes, and streamlining existing processes. By leveraging your skills in SQL, Python, Pyspark, Hive, and Impala, you will work with large datasets to extract insights that drive revenue growth and business decisions. Additionally, your experience in Investment Analytics, Retail Analytics, Credit Cards, and Financial Services will be valuable in delivering actionable intelligence to business leaders. To excel in this role, you should possess a master's or bachelor's degree in Engineering, Technology, or Computer Science from premier institutes, along with 5-6 years of experience in delivering analytical solutions. Your ability to articulate and solve complex business problems, along with excellent communication and interpersonal skills, will be key in collaborating with cross-functional teams and stakeholders. Moreover, your hands-on experience in Tableau and project management skills will enable you to mentor and guide junior team members effectively. If you are passionate about data, eager to tackle new challenges, and thrive in a dynamic work environment, this position offers you the opportunity to contribute to Citi's mission of enabling growth and economic progress through innovative analytics solutions. Join us in driving business success and making a positive impact on the financial services industry. Citi is an equal opportunity and affirmative action employer, offering full-time employment in the field of Investment Analytics, Retail Analytics, Credit Cards, and Financial Services. If you are ready to take your analytics career to the next level, we invite you to apply and be part of our global community at Citi.,
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As a Senior Machine Learning Engineer Contractor specializing in AWS ML Pipelines, your primary responsibility will be to design, develop, and deploy advanced ML pipelines within an AWS environment. You will work on cutting-edge solutions that automate entity matching for master data management, implement fraud detection systems, handle transaction matching, and integrate GenAI capabilities. The ideal candidate for this role should possess extensive hands-on experience in AWS services like SageMaker, Bedrock, Lambda, Step Functions, and S3. Moreover, you should have a strong command over CI/CD practices to ensure a robust and scalable solution. Your key responsibilities will include designing and developing end-to-end ML pipelines focusing on entity matching, fraud detection, and transaction matching. You will be integrating generative AI solutions using AWS Bedrock to enhance data processing and decision-making. Collaboration with cross-functional teams to refine business requirements and develop data-driven solutions tailored to master data management needs will also be a crucial aspect of your role. In terms of AWS ecosystem expertise, you will be required to utilize SageMaker for model training, deployment, and continuous improvement. Additionally, leveraging Lambda and Step Functions to orchestrate serverless workflows for data ingestion, preprocessing, and real-time processing will be part of your daily tasks. Managing data storage, retrieval, and scalability concerns using AWS S3 will also be within your purview. Furthermore, you will need to develop and integrate automated CI/CD pipelines to streamline model testing, deployment, and version control. Ensuring rapid iteration and robust deployment practices to maintain high availability and performance of ML solutions will be essential. Data security and compliance will be a critical aspect of your role. You will need to implement security best practices to safeguard sensitive data, ensuring compliance with organizational and regulatory requirements. Incorporating monitoring and alerting mechanisms to maintain the integrity and performance of deployed ML models will be part of your responsibilities. Collaboration and documentation will also play a significant role in your day-to-day activities. Working closely with business stakeholders, data engineers, and data scientists to ensure solutions align with evolving business needs will be crucial. You will also need to document all technical designs, workflows, and deployment processes to support ongoing maintenance and future enhancements. Providing regular progress updates and adapting to changing priorities or business requirements in a dynamic environment are expected. To qualify for this role, you should have at least 5+ years of professional experience in developing and deploying ML models and pipelines. Proven expertise in AWS services including SageMaker, Bedrock, Lambda, Step Functions, and S3 is necessary. Strong proficiency in Python and/or PySpark, demonstrated experience with CI/CD tools and methodologies, and practical experience in building solutions for entity matching, fraud detection, and transaction matching within a master data management context are also required. Familiarity with generative AI models and their application within data processing workflows will be an added advantage. Strong analytical and problem-solving skills are essential for this role. 
You should be able to transform complex business requirements into scalable technical solutions and possess strong data analysis capabilities with a track record of developing models that provide actionable insights. Excellent verbal and written communication skills, the ability to work independently as a contractor while effectively collaborating with remote teams, and a proven record of quickly adapting to new technologies and agile work environments are also preferred qualities for this position. A Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field is a plus. Experience with additional AWS services such as Kinesis, Firehose, and SQS, prior experience in a consulting or contracting role demonstrating the ability to manage deliverables under tight deadlines, and experience within industries where data security and compliance are critical will be advantageous.,
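Entity matching for master data management can be approached in many ways; purely as a non-authoritative sketch (the blocking key, similarity measure, and threshold are assumptions), a simple blocked pairwise comparison looks like this:

```python
from difflib import SequenceMatcher
import pandas as pd

def similarity(a: str, b: str) -> float:
    """Normalised string similarity between two entity names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Two illustrative source systems to be reconciled into a master record set
crm = pd.DataFrame({"crm_id": [1, 2], "name": ["Acme Corp Ltd", "Globex Inc"], "zip": ["560001", "110001"]})
erp = pd.DataFrame({"erp_id": ["A", "B"], "name": ["ACME Corporation Ltd", "Initech LLC"], "zip": ["560001", "400001"]})

# Blocking: only compare records that share a postal code to avoid an all-pairs scan
candidates = crm.merge(erp, on="zip", suffixes=("_crm", "_erp"))
candidates["score"] = candidates.apply(lambda r: similarity(r["name_crm"], r["name_erp"]), axis=1)

matches = candidates[candidates["score"] >= 0.75][["crm_id", "erp_id", "score"]]
print(matches)
```

At production scale this pairwise step would typically run in PySpark and feed a trained matching model rather than a fixed threshold.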
Posted 3 days ago
0.0 - 4.0 years
0 Lacs
karnataka
On-site
We are looking for a Data Engineer to join our data team. You will be responsible for managing our master data set, developing reports, and troubleshooting data issues. To excel in this role, attention to detail, experience as a data analyst, and a deep understanding of popular data analysis tools and databases are essential. Your responsibilities include: - Building, maintaining, and managing data pipelines for efficient data flow between systems. - Collaborating with stakeholders to design and manage customized data pipelines. - Testing various ETL (Extract, Transform, Load) tools for data ingestion and processing. - Assisting in scaling the data infrastructure to meet the organization's growing data demands. - Monitoring data pipeline performance and troubleshooting data issues. - Documenting pipeline architectures and workflows for future reference and scaling. - Evaluating data formats, sources, and transformation techniques. - Working closely with data scientists to ensure data availability and reliability for analytics. We require the following skill sets/experience: - Proficiency in Python, PySpark, and Big Data concepts such as Data Lakes and Data Warehouses. - Strong background in SQL. - Familiarity with cloud computing platforms like AWS, Azure, or Google Cloud. - Basic knowledge of containerization technologies like Docker. - Exposure to data orchestration tools like Apache Airflow or Luigi. Pedigree: - Bachelor's degree in Computer Science, Electrical Engineering, or IT.,
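Since exposure to orchestration tools like Apache Airflow is listed above, a bare-bones DAG covering the extract, transform, and load steps described in the responsibilities might look like this (task bodies and the schedule are placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    # Pull data from a source system (placeholder)
    print("extracting source data")

def transform(**_):
    # Apply cleansing and business rules (placeholder)
    print("transforming data")

def load(**_):
    # Write the result to the warehouse or data lake (placeholder)
    print("loading curated data")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```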
Posted 3 days ago
9.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description This is a remote position. Job Summary We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems. Responsibilities Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena. Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data. Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg. Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modeling. Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging. Implement S3 lifecycle policies, intelligent file partitioning, and audit logging. Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs. Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling. Design and manage a forecast feature registry with metrics versioning and traceability. Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption. Requirements Essential Skills: Job Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and Glue Data Catalog. Strong command over PySpark, dbt-core, CTAS query optimization, and partition strategies. Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion. Experience in S3 metadata tagging and scalable data lake design patterns. Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows). Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation. Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, or demand volatility. Data observability best practices including field-level logging, anomaly alerts, and classification tagging. Personal Independent, critical thinker with the ability to design for scale and evolving business logic. Strong communication and collaboration with BI, QA, and business stakeholders. High attention to detail in ensuring data accuracy, quality, and documentation. Comfortable interpreting business-level KPIs and transforming them into technical pipelines. Preferred Skills Job Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries. Familiarity with Superset or Streamlit for QA visualization and UAT reporting. Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion. Personal Proactive, ownership-driven mindset with a collaborative approach. Strong communication and collaboration skills. Strong problem-solving skills with attention to detail. Have the ability to work under stringent deadlines and demanding client conditions. Strong analytical and problem-solving skills. Ability to work in fast-paced, delivery-focused environments. Strong mentoring and documentation skills for scaling the platform. Other Relevant Information Bachelor’s degree in Computer Science, Information Technology, or a related field. 
Minimum 9+ years of experience in data engineering & architecture. Benefits This role offers the flexibility of working remotely in India. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
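To make the "lagged, rolling, and trend features" requirement concrete, a small PySpark window-function sketch is shown below; the table grain, column names, and window sizes are assumptions rather than details from the role:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("forecast_features").getOrCreate()

# Daily revenue per property at the Clean/Modeled layer (illustrative path and schema)
daily = spark.read.parquet("s3://example-lake/clean/daily_revenue/")

w = Window.partitionBy("property_id").orderBy("date")
w7 = w.rowsBetween(-6, 0)  # trailing 7-day window including the current day

features = (
    daily
    .withColumn("revenue_lag_1", F.lag("revenue", 1).over(w))           # previous day
    .withColumn("revenue_lag_7", F.lag("revenue", 7).over(w))           # same weekday last week
    .withColumn("revenue_roll_7", F.avg("revenue").over(w7))            # rolling weekly mean
    .withColumn("revenue_trend_7", F.col("revenue") - F.col("revenue_roll_7"))  # deviation from trend
)

features.write.mode("overwrite").partitionBy("property_id").parquet(
    "s3://example-lake/modeled/forecast_features/"
)
```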
Posted 3 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
JOB_POSTING-3-72996-2 Job Description Role Title: AVP, Cloud Solution Architect (L11) Company Overview COMPANY OVERVIEW: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India’s Best Companies to Work for 2023, #21 under LinkedIn Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India’s Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India’s Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition to this, we also have Regional Engagement Hubs across India and a co-working space in Bangalore. Organizational Overview Organizational Overview: This role will be part of the Data Architecture & Analytics group part of CTO organization. Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading(ETL). Collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency. Building and maintaining data models to facilitate data access and analysis by Data Scientists and Analysts. Responsible for the SYF public cloud platform & services. Govern health, performance, capacity, and costs of resources and ensure adherence to service levels Build well defined processes for cloud application development and service enablement. Role Summary/Purpose The Cloud Solution Architect – will play a key role in modernizing SAS workloads by leading vendor refactoring efforts, break-fix execution, and user enablement strategies. This position requires a deep understanding of SAS, AWS analytics services (EMR Studio, S3, Redshift, Glue), and Tableau, combined with strong user engagement, training development, and change management skills. The role involves collaborating with vendors, business users, and cloud engineering teams to refactor legacy SAS code, ensure seamless execution of fixes, and develop comprehensive training materials and user job aids. Additionally, the Cloud Solution Architect will oversee user testing, validation, and sign-offs, ensuring a smooth transition to modern cloud-based solutions while enhancing adoption and minimizing disruption. This is an exciting opportunity to lead cloud migration initiatives, enhance analytics capabilities, and drive user transformation efforts within a cutting-edge cloud environment. Key Responsibilities Lead refactoring efforts to modernize and migrate SAS-based workloads to cloud-native or alternative solutions. Oversee break/fix execution by ensuring timely resolution of system issues and performance optimizations. Engage with end-users to gather requirements, address pain points, and ensure smooth adoption of cloud solutions. Develop and deliver custom training programs, including user job aids and self-service documentation. Facilitate user sign-offs and testing by coordinating validation processes and ensuring successful implementation. Drive user communication efforts related to system changes, updates, and migration timelines. 
Work closely with AWS teams to optimize EMR Studio, Redshift, Glue, and other AWS services for analytics and reporting. Ensure seamless integration with Tableau and other visualization tools to support business reporting needs. Implement best practices for user change management, minimizing disruption and improving adoption. Required Skills/Knowledge Bachelor’s Degree in Computer Science, Software Engineering, or a related field. Advanced degrees (Master’s or Ph.D.) can be a plus but are not always necessary if experience is significant. Experience in scripting languages (Python, SQL, or PySpark) for data transformations. Proven expertise in SAS, including experience with SAS code refactoring and optimization. Strong AWS experience, particularly with EMR Studio, S3, Redshift, Glue, and Lambda. Experience in user change management, training development, and communication strategies. Desired Skills/Knowledge Experience with AWS cloud services. Certifications in AWS or any other cloud platform. Experience with Agile project management methods and practices. Proficiency in Tableau for analytics and visualization. Hands-on experience with cloud migration projects, particularly SAS workloads. Excellent communication and stakeholder engagement skills. Familiarity with other cloud platforms like Azure or GCP is a plus. Eligibility Criteria 10+ years of experience in data analytics, cloud solutions, or enterprise architecture, with a focus on SAS migration and AWS cloud adoption. or in lieu of a degree 12+ years of experience Work Timings: 3 PM to 12 AM IST (WORK TIMINGS: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details .) For Internal Applicants Understand the criteria or mandatory skills required for the role, before applying Inform your manager and HRM before applying for any role on Workday Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format) Must not be any corrective action plan (First Formal/Final Formal, PIP) L9+ Employees who have completed 18 months in the organization and 12 months in current role and level are only eligible. L9 + Employees can apply. Level / Grade : 11 Job Family Group Information Technology
Posted 3 days ago
4.0 - 11.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hello, greetings from Quess Corp! Hope you are doing well. We have a job opportunity with one of our clients. Designation – Data Engineer; Location – Gurugram; Experience – 4 to 11 years; Qualification – Graduate / PG (IT); Skill Set – Data Engineer, Python, AWS, SQL.

Essential capabilities: Enthusiasm for technology, keeping up with latest trends. Ability to articulate complex technical issues and desired outcomes of system enhancements. Proven analytical skills and evidence-based decision making. Excellent problem solving, troubleshooting & documentation skills. Strong written and verbal communication skills. Excellent collaboration and interpersonal skills. Strong delivery focus with an active approach to quality and auditability. Ability to work under pressure and excel within a fast-paced environment. Ability to self-manage tasks. Agile software development practices.

Desired Experience: Hands-on in SQL and its Big Data variants (Hive-QL, Snowflake ANSI, Redshift SQL). Python and Spark and one or more of its APIs (PySpark, Spark SQL, Scala), Bash/Shell scripting. Experience with source code control - GitHub, VSTS etc. Knowledge and exposure to Big Data technologies in the Hadoop stack such as HDFS, Hive, Impala, Spark etc., and cloud Big Data warehouses - Redshift, Snowflake etc. Experience with UNIX command-line tools. Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc. Understanding and ability to translate/physicalise Data Models (Star Schema, Data Vault 2.0 etc.).

Essential Experience: It is expected that the role holder will most likely have the following qualifications and experience. 4-11 years technical experience (within the financial services industry preferred). Technical Domain experience (Subject Matter Expertise in Technology or Tools). Solid experience, knowledge and skills in Data Engineering, BI/software development such as ELT/ETL, data extraction and manipulation in Data Lake/Data Warehouse/Lake House environments. Hands-on programming experience in writing Python, SQL, Unix Shell scripts, and PySpark scripts in a complex enterprise environment. Experience in configuration management using Ansible/Jenkins/GIT. Hands-on cloud-based solution design, configuration and development experience with Azure and AWS. Hands-on experience of using AWS services - S3, EC2, EMR, SNS, SQS, Lambda functions, Redshift. Hands-on experience of building data pipelines to ingest and transform on the Databricks Delta Lake platform from a range of data sources - databases, flat files, streaming etc. Knowledge of Data Modelling techniques and practices used for a Data Warehouse/Data Mart application. 
Quality engineering development experience (CI/CD – Jenkins, Docker) Experience in Terraform, Kubernetes and Docker Experience with Source Control Tools – Github or BitBucket Exposure to relational Databases - Oracle or MS SQL or DB2 (SQL/PLSQL, Database design, Normalisation, Execution plan analysis, Index creation and maintenance, Stored Procedures) , PostGres/MySQL Skilled in querying data from a range of data sources that store structured and unstructured data Knowledge or understanding of Power BI (Recommended) Key Accountabilities Design, develop, test, deploy, maintain and improve software Develop flowcharts, layouts and documentation to identify requirements & solutions Write well designed & high-quality testable code Produce specifications and determine operational feasibility Integrate software components into fully functional platform Apply pro-actively & perform hands-on design and implementation of best practice CI/CD Coaching & mentoring of other Service Team members Develop/contribute to software verification plans and quality assurance procedures Document and maintain software functionality Troubleshoot, debug and upgrade existing systems, including participating in DR tests Deploy programs and evaluate customer feedback Contribute to team estimation for delivery and expectation management for scope. Comply with industry standards and regulatory requirements
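Because hands-on experience building pipelines on the Databricks Delta Lake platform is called out above, here is a hedged sketch of an incremental upsert using the delta-spark MERGE API (paths, keys, and the session setup are assumed; on Databricks the SparkSession is preconfigured):

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta_upsert").getOrCreate()

target_path = "dbfs:/mnt/lake/silver/customers"                               # illustrative Delta table location
updates = spark.read.parquet("dbfs:/mnt/lake/bronze/customers_increment/")    # today's increment

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()      # update changed attributes
        .whenNotMatchedInsertAll()   # insert brand-new customers
        .execute()
    )
else:
    # First load: create the Delta table from the initial batch
    updates.write.format("delta").mode("overwrite").save(target_path)
```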
Posted 3 days ago
8.0 - 13.0 years
13 - 17 Lacs
Noida, Pune, Bengaluru
Work from Office
Position Summary We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling/stakeholder management skills. This role requires a strategic thinker who can drive the design, development and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency. Job Responsibilities Technology Leadership- Lead and guide the team independently or with little support to design, implement and deliver complex cloud-based data engineering / data warehousing project assignments Solution Architecture & Review- Expertise in conceptualizing solution architecture and low-level design in a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies Managing projects in a fast-paced agile ecosystem and ensuring quality deliverables within stringent timelines Responsible for Risk Management, maintaining the Risk documentation and mitigation plans. Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation and deployments. Communication & Logical Thinking- Demonstrates strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment. Capable of effectively presenting and defending team viewpoints, while securing buy-in from both technical and client stakeholders. Handle Client Relationship- Manage client relationships and client expectations independently. Should be able to deliver results back to the Client independently. Should have excellent communication skills. Education BE/B.Tech Master of Computer Application Work Experience Should have expertise and 8+ years of working experience in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend Should have expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, Oracle Should have strong Data Warehousing, Data Integration and Data Modeling fundamentals like Star Schema, Snowflake Schema, Dimension Tables and Fact Tables. Strong experience on SQL building blocks. Creating complex SQL queries and Procedures. Experience in AWS or Azure cloud and its service offerings Aware of techniques such as Data Modelling, Performance tuning and regression testing Willingness to learn and take ownership of tasks. Excellent written/verbal communication and problem-solving skills and Understanding and working experience on Pharma commercial data sets like IQVIA, Veeva, Symphony, Liquid Hub, Cegedim etc. would be an advantage Hands-on in scrum methodology (Sprint planning, execution and retrospection) Behavioural Competencies Teamwork & Leadership Motivation to Learn and Grow Ownership Cultural Fit Talent Management Technical Competencies Problem Solving Lifescience Knowledge Communication Designing technical architecture Agile PySpark AWS Data Pipeline Data Modelling Matillion Databricks Location - Noida, Bengaluru, Pune, Hyderabad, India
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : PySpark Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code for multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of the projects you are involved in, ensuring that client requirements are met effectively and efficiently. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. - Conduct code reviews to ensure adherence to best practices and coding standards. Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark. - Strong understanding of data processing frameworks and distributed computing. - Experience with data transformation and ETL processes. - Familiarity with cloud platforms and services related to data processing. - Ability to troubleshoot and optimize performance issues in applications. Additional Information: - The candidate should have minimum 3 years of experience in PySpark. - This position is based at our Chennai office. - A 15 years full time education is required.
Posted 4 days ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Description
Some careers have more impact than others. If you're looking for a career where you can make a real impression, join HSBC and discover how valued you'll be.
HSBC is one of the largest banking and financial services organizations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Analyst, Business Consulting.
Principal Responsibilities
The role is specific to the Risk Trigger team, which works with the Banking & Financial Crime Risk (BFCR) business to monitor and flag risk associated with the area.
Maintain and deliver existing triggers, with a focus on identifying and working on new risk areas.
Adoption of the latest CIB D&A platform and tooling (SQL/Python/PySpark/DSW etc.).
Automation of the existing book of work.
Provide support for analytical/strategic projects.
Work with the wider Wholesale Business Risk (WBR) team to support ongoing initiatives and POCs.
The jobholder will:
Provide analytical support and timely delivery of insights to the Banking & Financial Crime Risk (BFCR) business to monitor and flag risk associated with the area.
Perform exploratory data analysis, data quality checks, and application of basic statistics to support data-driven decision making.
Conduct post-implementation reviews of ongoing projects/triggers.
Carry out trend analysis and dashboard creation based on visualization techniques.
Participate in projects leading to solutions using various analytic techniques.
Execute the assigned projects/analysis as per the agreed timelines and with accuracy and quality.
Produce high-quality data and reports which support process improvements, decision-making and achievement of performance targets across the respective business areas.
Complete analysis as required, document results and formally present findings to management.
Be responsible for developing and executing Business Intelligence/analytical initiatives in line with the objectives laid down by the business, using different structured and unstructured data sources.
Requirements
Basic data & analytics experience or equivalent.
Knowledge and understanding of financial services preferred.
Bachelor's or Master's degree from a reputed university in Maths/Stats or another numerical discipline, with concentration in Computers, Science or other fields such as engineering.
Familiarity with analytic systems, with SQL and Python skills.
Strong Microsoft suite skills; Qlik Sense, BigQuery and GCP knowledge.
Strong analytical skills and detail oriented.
Understanding of basic data quality management principles.
Good communication skills, both written and spoken.
Ability to develop and effectively communicate complex concepts and ideas.
Ability to work in cross-functional teams.
Strong interpersonal skills and drive for success.
Independent worker with high drive who can support global teams with demanding work hours.
Analytical thought process and aptitude for problem solving.
Understands business requirements well enough to produce, automate, analyse and interpret analysis reports and support compliance and regulatory requirements.
Responsible for effectively delivering projects within timelines and at the desired quality level.
You'll achieve more at HSBC.
HSBC is an equal opportunity employer committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role.
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued By HSBC Electronic Data Processing (India) Private LTD
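As a loose illustration of the exploratory data analysis and data quality checks this role mentions, here is a small PySpark sketch computing null counts per column and flagging duplicate business keys. The `transactions` DataFrame and its columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks_example").getOrCreate()

# Hypothetical transaction records with a missing amount and a duplicated key.
transactions = spark.createDataFrame(
    [("T1", "C1", 100.0), ("T2", "C2", None), ("T2", "C2", None)],
    ["txn_id", "customer_id", "amount"],
)

# Null count per column.
null_counts = transactions.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in transactions.columns]
)
null_counts.show()

# Duplicate check on the business key.
duplicates = transactions.groupBy("txn_id").count().filter(F.col("count") > 1)
duplicates.show()
```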
Posted 4 days ago
1.0 - 3.0 years
9 - 13 Lacs
Pune
Work from Office
Your Team Responsibilities
We are hiring an Associate Data Engineer to support our core data pipeline development efforts and gain hands-on experience with industry-grade tools like PySpark, Databricks, and cloud-based data warehouses. The ideal candidate is curious, detail-oriented, and eager to learn from senior engineers while contributing to the development and operationalization of critical data workflows.
Your Key Responsibilities
Assist in the development and maintenance of ETL/ELT pipelines using PySpark and Databricks under senior guidance.
Support data ingestion, validation, and transformation tasks across Rating Modernization and Regulatory programs.
Collaborate with team members to gather requirements and document technical solutions.
Perform unit testing, data quality checks, and process monitoring activities.
Contribute to the creation of stored procedures, functions, and views.
Support troubleshooting of pipeline errors and validation issues.
Your Skills And Experience That Will Help You Excel
Bachelor's degree in Computer Science, Engineering, or a related discipline.
3+ years of experience in data engineering or internships in data/analytics teams.
Working knowledge of Python, SQL, and ideally PySpark.
Understanding of cloud data platforms (Databricks, BigQuery, Azure/GCP).
Strong problem-solving skills and eagerness to learn distributed data processing.
Good verbal and written communication skills.
About MSCI: What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.
MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.
MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability Assistance@msci and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.
To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.
Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers msci
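As an illustration of the unit testing and PySpark pipeline work this role mentions, here is a minimal sketch in which the transformation logic lives in a small function so it can be tested against an in-memory DataFrame. The `add_rating_bucket` function, column names, and thresholds are invented for the example.

```python
from pyspark.sql import SparkSession, DataFrame, functions as F

def add_rating_bucket(df: DataFrame) -> DataFrame:
    """Derive a coarse rating bucket from a numeric score column."""
    return df.withColumn(
        "rating_bucket",
        F.when(F.col("score") >= 80, "high")
         .when(F.col("score") >= 50, "medium")
         .otherwise("low"),
    )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("transform_test_example").getOrCreate()
    sample = spark.createDataFrame([("A", 85), ("B", 60), ("C", 30)], ["id", "score"])
    result = add_rating_bucket(sample)
    # A lightweight assertion in place of a full pytest suite.
    assert dict(result.select("id", "rating_bucket").collect()) == {
        "A": "high", "B": "medium", "C": "low",
    }
    result.show()
```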
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : PySpark
Good to have skills : NA
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification : 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud platforms and services related to data storage and processing.
Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Chennai office.
- A 15 years full time education is required.
Posted 4 days ago
6.0 - 11.0 years
30 - 35 Lacs
Chennai
Hybrid
Data Engineer Lead:
Bachelor's Degree in Information Systems, Computer Science, or a quantitative discipline such as Mathematics or Engineering, and/or equivalent formal training or work experience.
Five to seven (5-7) years of equivalent work experience in measurement and analysis, quantitative business problem-solving, simulation development and/or predictive analytics.
Extensive knowledge in data engineering and machine learning frameworks, including design, development and implementation of highly complex systems and data pipelines.
Extensive knowledge in information systems, including design, development and implementation of large batch or online transaction-based systems.
Strong understanding of the transportation industry, competitors, and evolving technologies.
Experience providing leadership in a general planning or consulting setting.
Experience as a leader or a senior member of multi-function project teams.
Strong oral and written communication skills.
A related advanced degree may offset the related experience requirements.
Skill/Knowledge Considered a Plus
Technical background in computer science, software engineering, database systems, distributed systems.
Fluency with distributed and cloud environments and a deep understanding of optimizing computational considerations against theoretical properties.
Experience in building robust cloud-based data engineering and curation solutions to create data products useful for numerous applications.
Detailed knowledge of the Microsoft Azure tooling for large-scale data engineering efforts and deployments is highly preferred. Experience with any combination of the following Azure tools: Azure Databricks, Azure Data Factory, Azure SQL DB, Azure Synapse Analytics.
Developing and operationalizing capabilities and solutions, including under near real-time, high-volume streaming conditions.
Hands-on development skills with the ability to work at the code level and help debug hard-to-resolve issues.
A compelling track record of designing and deploying large-scale technical solutions which deliver tangible, ongoing value.
Direct experience having built and deployed robust, complex production systems that implement modern data processing methods at scale.
Ability to context-switch, to provide support to dispersed teams which may need an "expert hacker" to unblock an especially challenging technical obstacle, and to work through problems as they are still being defined.
Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value.
An 'engineering' mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress or magnify impact.
Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews.
Ability to conduct data analysis, investigation, and lineage studies to document and enhance data quality and access.
Use of agile and DevOps practices for project and software management, including continuous integration and continuous delivery.
Demonstrated expertise working with some of the following common languages and tools:
Spark (Scala and PySpark), Kafka and other high-volume data tools.
SQL and NoSQL storage tools, such as MySQL, Postgres, MongoDB/CosmosDB.
Java and Python data tools.
Azure DevOps experience to track work, develop using git-integrated version control patterns, and build and utilize CI/CD pipelines.
Working knowledge and experience implementing data architecture patterns to support varying business needs.
Experience with different data types (JSON, XML, Parquet, Avro, unstructured) for both batch and streaming ingestions.
Use of Azure Kubernetes Service, Event Hubs, or other related technologies to implement streaming ingestions (see the sketch after this list).
Experience developing and implementing alerting and monitoring frameworks.
Working knowledge of Infrastructure as Code (IaC) through Terraform to create and deploy resources.
Implementation experience across different data stores, messaging systems, and data processing engines.
Data integration through APIs and/or REST services.
PowerPlatform (Power BI, PowerApps, Power Automate) development experience a plus.
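As a rough sketch of the streaming-ingestion pattern listed above (a Kafka or Event Hubs-compatible source into a parquet sink with Spark Structured Streaming), the following PySpark example may help orient candidates. The broker address, topic, schema, and paths are all hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming_ingest_example").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("value", DoubleType()),
])

# Read JSON events from a Kafka topic (requires the spark-sql-kafka package on the classpath).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events-topic")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write the parsed stream to parquet, with checkpointing for recoverable, exactly-once sinks.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/streaming/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```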
Posted 4 days ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role : Custom Software Engineer
Project Role Description : Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must have skills : PySpark
Good to have skills : NA
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification : 15 years full time education
Summary: As a Custom Software Engineer, you will develop custom software solutions to design, code, and enhance components across systems or applications. Your typical day will involve collaborating with cross-functional teams to understand business requirements, utilizing modern frameworks and agile practices to deliver scalable and high-performing solutions tailored to specific business needs. You will engage in problem-solving activities, ensuring that the software solutions meet the highest standards of quality and performance while adapting to evolving project requirements.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve software development processes to increase efficiency.
Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with modern software development methodologies, particularly Agile.
- Familiarity with cloud platforms and services for deploying applications.
- Ability to troubleshoot and optimize performance in software applications.
Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Bhubaneswar office.
- A 15 years full time education is required.
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : PySpark
Good to have skills : NA
Minimum 5 Year(s) Of Experience Is Required
Educational Qualification : 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud platforms and services related to data storage and processing.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Posted 4 days ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Greetings! Hiring a GCP Data Engineer for the Hyderabad location. Experience - 3 to 8 years. Skills: GCP, PySpark, DAG, Airflow, Python, Teradata (good to have). Job location - Hyderabad (WFO). Interested candidates can share their profiles with anmol.bhatia@incedoinc.com
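As a brief illustration of the Airflow and DAG skills this opening lists, here is a minimal Airflow DAG sketch in Python. The DAG id, schedule, and task are illustrative and not part of the actual role.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for a PySpark/Dataproc or BigQuery load step (hypothetical).
    print("running extract-and-load step")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_etl = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```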
Posted 4 days ago