
8325 PySpark Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 11.0 years

0 Lacs

chennai, tamil nadu

On-site

As an experienced professional with 7-8 years of experience, you will work in a hybrid model based in Chennai in an individual contributor role. You will draw on your expertise in SQL, PySpark, NLP (Natural Language Processing), Scrum, and Agile ways of working to extract valuable insights from the bank's data reserves. The purpose of the role is to leverage innovative data analytics and machine learning techniques to inform strategic decision-making, enhance operational efficiency, and foster innovation across the organization.

Your key responsibilities will include identifying, collecting, and extracting data from internal and external sources; cleaning, wrangling, and transforming data to ensure quality for analysis; and developing and maintaining efficient data pipelines for automated data acquisition and processing. You will design and implement statistical and machine learning models to analyze patterns, trends, and relationships within the data, and build predictive models to forecast future outcomes and identify potential risks and opportunities. Collaborating with business stakeholders to extract value from data through Data Science will also be a crucial aspect of your role.

In this role, you are expected to consult on complex issues and provide advice to People Leaders, contributing to the resolution of escalated issues. You will identify ways to mitigate risk and develop new policies and procedures in support of the control and governance agenda, and take ownership of managing risks and strengthening controls related to your work. Collaboration with other work areas aligned with business support is essential to stay informed about business activities and strategies. You will engage in complex data analysis from multiple internal and external sources, such as procedures and practices, to solve problems creatively and effectively, and you will communicate complex information while influencing stakeholders to achieve desired outcomes.

Preferred qualifications for this role include experience in Python for test analysis, survey design, and survey analysis; competency in statistics, including linear regression, t-tests, and logistic regression; and the ability to peer-review code, methods, and class constructs. If you meet the requirements and are interested in this opportunity, please share your CV at akansha.prometheus@gmail.com.
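The preferred qualifications above call for statistics in Python, specifically t-tests and logistic regression. Below is a minimal, self-contained sketch of both on synthetic data; the group means, features, and thresholds are made up for illustration and are not part of the posting.

```python
# Hypothetical example: a two-sample t-test and a logistic regression
# on synthetic data, illustrating the statistics named in the posting.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Two groups of survey scores (e.g., before/after a process change).
group_a = rng.normal(loc=7.2, scale=1.1, size=200)
group_b = rng.normal(loc=7.6, scale=1.0, size=200)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Logistic regression: predict a binary outcome from two numeric features.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)
model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "training accuracy:", model.score(X, y))
```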

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

kochi, kerala

On-site

The ideal candidate for the role of Data Architect should have at least 8 years of experience across Modern Data Architecture, RDBMS, ETL, NoSQL, data warehousing, data governance, data modeling, and performance optimization, along with proficiency in Azure, AWS, or GCP. Primary skills include defining architecture and end-to-end development of database, ETL, and data governance processes. The candidate must bring technical leadership and provide mentorship to junior team members, and must have hands-on experience in 3 to 4 end-to-end projects involving Modern Data Architecture and Data Governance.

Responsibilities include defining the architecture for data engineering projects and data governance systems; designing, developing, and supporting data integration applications on Azure, AWS, or GCP cloud platforms; and implementing performance optimization techniques. Proficiency in advanced SQL and experience in modeling and designing transactional and DWH databases are required, and adherence to ISMS policies and procedures is mandatory.

Good-to-have skills include Python, PySpark, and Power BI. The candidate is expected to onboard by 15/01/2025 and hold a Bachelor's degree. The role entails performing all duties in accordance with the company's policies and procedures.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

As a Level 9 Industry and Functional AI Decision Science Consultant at Accenture Strategy & Consulting within the Global Network - Data & AI team, your primary responsibility will be to assist clients in the Comms & Media - Telecom practice by designing and implementing AI solutions for their business needs. You will leverage your expertise in the Telco domain, AI fundamentals, and hands-on experience with large datasets to deliver valuable insights and recommendations to key stakeholders. Your role will involve proposing solutions based on comprehensive gap analysis of existing Telco platforms, identifying long-term value propositions, and translating business requirements into functional specifications. By collaborating closely with client stakeholders through interviews and workshops, you will gather essential insights to address their unique challenges and opportunities.

In addition to understanding the current processes and potential issues within the Telco environment, you will be responsible for designing future state solutions that leverage Data & AI capabilities effectively. Your ability to analyze complex problems systematically, anticipate obstacles, and establish a clear project roadmap will be crucial to driving successful outcomes for clients. Furthermore, you will act as a strategic partner to clients, aligning their business goals with innovative AI-driven strategies to enhance revenue growth and operational efficiency. Your expertise in storytelling through data analysis will enable you to craft compelling narratives that resonate with senior stakeholders and drive informed decision-making.

To excel in this role, you should possess a minimum of 5 years of experience in Data Science, with at least 3 years dedicated to Telecom Analytics. A postgraduate degree from a reputable institution and proficiency in data mining, statistical analysis, and advanced predictive modeling techniques are essential qualifications for this position. Your hands-on experience with various analytical tools and programming languages, such as Python, R, and SQL, will be instrumental in delivering impactful solutions to clients.

As a proactive and collaborative team player, you will actively engage with cross-functional teams, mentor junior members, and uphold a high standard of excellence in client interactions. Your strong analytical skills, problem-solving capabilities, and ability to work independently across multiple projects will be key to your success in this dynamic and fast-paced environment. While cloud platform certifications and experience in Computer Vision are considered advantageous, your commitment to continuous learning and staying abreast of industry trends will be highly valued in this role. Overall, your dedication to delivering value-driven AI solutions and fostering long-lasting client relationships will be instrumental in driving success for both Accenture and its clients.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Join us as an ETL Developer at Barclays, where you will be responsible for supporting the successful delivery of Location Strategy projects to plan, budget, agreed quality, and governance standards. Spearheading the evolution of our digital landscape, you will drive innovation and excellence, using cutting-edge technology to revolutionize our digital offerings and ensure unparalleled customer experiences.

To be successful in this role, you should have experience with:
- Good knowledge of Python
- Extensive hands-on PySpark
- Understanding of Data Warehousing concepts
- Strong SQL knowledge
- Proficiency in Big Data technologies (HDFS)
- Exposure to an AWS working environment

Additionally, highly valued skills may include:
- Working knowledge of AWS
- Familiarity with Big Data

As an ETL Developer at Barclays, you may be assessed on key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, in addition to job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes, ensuring the accuracy, accessibility, and security of all data.

Accountabilities:
- Building and maintaining data architecture pipelines for the transfer and processing of durable, complete, and consistent data
- Designing and implementing data warehouses and data lakes to manage data volumes, velocity, and security measures
- Developing processing and analysis algorithms suitable for data complexity and volumes
- Collaborating with data scientists to build and deploy machine learning models

Analyst Expectations:
- Impacting the work of related teams within the area
- Partnering with other functions and business areas
- Taking responsibility for the end results of the team's operational processing and activities
- Escalating policy/procedure breaches appropriately
- Embedding new policies/procedures for risk mitigation
- Advising and influencing decision-making within own area of expertise
- Managing risk and strengthening controls in work areas
- Delivering work in line with relevant rules, regulations, and codes of conduct
- Building an understanding of how own area integrates with the function and the organization's products, services, and processes
- Demonstrating how areas contribute to achieving organization objectives
- Resolving problems and selecting solutions based on technical experience
- Guiding and persuading team members and communicating complex/sensitive information
- Acting as a contact point for stakeholders outside the function and building a network of contacts

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive.
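As an illustration of the Python, PySpark, and Data Warehousing skills listed above, here is a minimal sketch of a batch ETL step that reads raw CSV data from HDFS, cleanses it, and writes a partitioned Parquet table. The paths and column names are hypothetical, not Barclays' actual pipeline.

```python
# Minimal PySpark batch ETL sketch: read raw CSV from HDFS, cleanse,
# and write a partitioned Parquet table. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("hdfs:///data/raw/transactions/"))

clean = (raw
         .dropDuplicates(["transaction_id"])
         .withColumn("amount", F.col("amount").cast("double"))
         .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
         .filter(F.col("amount").isNotNull()))

(clean.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .parquet("hdfs:///data/curated/transactions/"))
```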

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Welcome to FieldAssist, where innovation meets excellence! We are a top-tier SaaS platform that specializes in optimizing Route-to-Market strategies and enhancing brand relationships within the CPG partner ecosystem. With over 1,00,000 sales users representing 600+ CPG brands across 10+ countries in South East Asia, the Middle East, and Africa, we reach 10,000 distributors and 7.5 million retail outlets every day. FieldAssist is a 'Proud Partner to Great Brands' like Godrej Consumers, Saro Africa, Danone, Tolaram, Haldiram’s, Eureka Forbes, Bisleri, Nilon’s, Borosil, Adani Wilmar, Henkel, Jockey, Emami, Philips, Ching’s and Mamaearth, among others. Do you crave a dynamic work environment where you can excel and enjoy the journey? We have the perfect opportunity for you!

Responsibilities:
- Build and maintain robust backend services and REST APIs using Python (Django, Flask, or FastAPI).
- Develop end-to-end ML pipelines including data preprocessing, model inference, and result delivery.
- Integrate and scale AI/LLM models, including RAG (Retrieval Augmented Generation) and intelligent agents.
- Design and optimize ETL pipelines and data workflows using tools like Apache Airflow or Prefect.
- Work with Azure SQL and Cosmos DB for transactional and NoSQL workloads.
- Implement and query vector databases for similarity search and embedding-based retrieval (e.g., Azure Cognitive Search, FAISS, or Pinecone).
- Deploy services on Azure Cloud, using Docker and CI/CD practices.
- Collaborate with cross-functional teams to bring AI features into product experiences.
- Write unit/integration tests and participate in code reviews to ensure high code quality.
- Develop and maintain applications using the .NET platform and environment.

Who we're looking for:
- Strong command of Python 3.x, with experience in Django, Flask, or FastAPI.
- Experience building and consuming RESTful APIs in production systems.
- Solid grasp of ML workflows, including model integration, inferencing, and LLM APIs (e.g., OpenAI).
- Familiarity with RAG, vector embeddings, and prompt-based workflows.
- Proficiency with Azure SQL and Cosmos DB (NoSQL).
- Experience with vector databases (e.g., FAISS, Pinecone, Azure Cognitive Search).
- Proficiency in containerization using Docker, and deployment on Azure Cloud.
- Experience with data orchestration tools like Apache Airflow.
- Comfortable working with Git, CI/CD pipelines, and observability tools.
- Strong debugging, testing (pytest/unittest), and optimization skills.

Good to Have:
- Experience with LangChain, transformers, or LLM fine-tuning.
- Exposure to MLOps practices and Azure ML.
- Hands-on experience with PySpark for data processing at scale.
- Contributions to open-source projects or AI toolkits.
- Background working in startup-like environments or cross-functional product teams.

FieldAssist on the Web:
Website: https://www.fieldassist.com/people-philosophy-culture/
Culture Book: https://www.fieldassist.com/fa-culture-book
CEO's Message: https://www.youtube.com/watch?v=bl_tM5E5hcw
LinkedIn: https://www.linkedin.com/company/fieldassist/
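The responsibilities above mention REST APIs with FastAPI and embedding-based retrieval against a vector index. Below is a minimal, hedged sketch combining the two with an in-memory FAISS index; the embedding step, dimension, documents, and endpoint are placeholders rather than FieldAssist's actual stack.

```python
# Hypothetical sketch: a FastAPI endpoint serving similarity search
# over an in-memory FAISS index. Embeddings here are random placeholders;
# a real service would call an embedding model (e.g., an LLM API).
import numpy as np
import faiss
from fastapi import FastAPI

DIM = 384
docs = ["retailer onboarding guide", "distributor claims FAQ", "route planning tips"]

rng = np.random.default_rng(0)
doc_vectors = rng.random((len(docs), DIM), dtype="float32")

index = faiss.IndexFlatL2(DIM)
index.add(doc_vectors)

app = FastAPI()

@app.get("/search")
def search(k: int = 2):
    # Placeholder query vector; in practice, embed the user's query text.
    query = rng.random((1, DIM), dtype="float32")
    distances, ids = index.search(query, k)
    return {"results": [docs[i] for i in ids[0]], "distances": distances[0].tolist()}
```

Assuming the file is saved as main.py, it could be run locally with `uvicorn main:app --reload`; a production service would persist the index and embed documents offline.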

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. You will be part of a team of highly skilled professionals working with cutting-edge technologies. Our purpose is to bring real positive changes in an increasingly virtual world, transcending generational gaps and disruptions of the future.

We are seeking AWS Glue Professionals with the following qualifications:
- 3 or more years of experience in AWS Glue, Redshift, and Python
- 3+ years of experience in engineering with expertise in ETL work with cloud databases
- Proficiency in data management and data structures, including writing code for data reading, transformation, and storage
- Experience in launching Spark jobs in client mode and cluster mode, with knowledge of Spark job property settings and their impact on performance
- Proficiency with source code control systems like Git
- Experience in developing ELT/ETL processes for loading data from enterprise-sized RDBMS systems such as Oracle, DB2, MySQL, etc.
- Coding proficiency in Python or expertise in high-level languages like Java, C, Scala
- Experience in using REST APIs
- Expertise in SQL for manipulating database data, familiarity with views, functions, stored procedures, and exception handling
- General knowledge of the AWS stack (EC2, S3, EBS), IT process compliance, SDLC experience, and formalized change controls
- Working in DevOps teams based on Agile principles (e.g., Scrum)
- ITIL knowledge, especially in incident, problem, and change management
- Proficiency in PySpark for distributed computation
- Familiarity with Postgres and ElasticSearch

At YASH, you will have the opportunity to build a career in an inclusive team environment. We offer career-oriented skilling models and leverage technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our workplace is grounded in four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- Support for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture
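As a hedged illustration of the AWS Glue work described above, the sketch below shows a minimal Glue (PySpark) job that reads a table from the Glue Data Catalog, applies a simple transformation, and writes Parquet to S3. The database, table, and bucket names are placeholders.

```python
# Hypothetical AWS Glue (PySpark) job sketch: read from the Glue Data Catalog,
# apply a simple transformation, and write Parquet to S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Placeholder catalog database/table, e.g., created by a Glue Crawler.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

df = dyf.toDF().dropDuplicates(["order_id"]).filter("amount > 0")

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
job.commit()
```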

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world - and it drives us beyond generational gaps and disruptions of the future.

We are looking to hire ETL (Extract, Transform, Load) Professionals with the following requirements:

Experience: 8-10 years

Job Description:
- 8 to 10 years of experience in designing and developing reliable solutions.
- Ability to work with business partners and provide long-lasting solutions.
- Minimum 5 years of experience in Snowflake.
- Strong knowledge of any ETL tool, Data Modeling, and Data Warehousing.
- Minimum 2 years of work experience in Data Vault modeling.
- Strong knowledge of SQL, PL/SQL, and RDBMS.
- Domain knowledge in Manufacturing, Supply Chain, Sales, or Finance areas.
- Good to have: SnapLogic knowledge or project experience.
- Good to have: cloud platform knowledge (AWS or Azure).
- Good to have: knowledge of Python/PySpark.
- Experience in data migration/modernization projects.
- Zeal to pick up new technologies and do POCs.
- Ability to lead a team to deliver the expected business results.
- Good analytical and strong troubleshooting skills.
- Excellent communication and strong interpersonal skills.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity.
- Agile self-determination, trust, transparency, and open collaboration.
- All support needed for the realization of business goals.
- Stable employment with a great atmosphere and ethical corporate culture.

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

We are seeking a Senior Data Engineer proficient in Azure Databricks, PySpark, and distributed computing to create and enhance scalable ETL pipelines for manufacturing analytics. You will work with industrial data to support real-time and batch data processing needs.

Your role will involve constructing scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark. You will be responsible for data pre-processing tasks such as cleaning, transformation, deduplication, normalization, encoding, and scaling to guarantee high-quality input for downstream analytics. Designing and managing cloud-based data architectures, such as data lakes, lakehouses, and warehouses following the Medallion Architecture, will also be part of your duties.

You will be expected to deploy and optimize data solutions on Azure, AWS, or GCP, focusing on performance, security, and scalability. You will develop and optimize ETL/ELT pipelines for structured and unstructured data sourced from IoT, MES, SCADA, LIMS, and ERP systems, and automate data workflows using CI/CD and DevOps best practices for security and compliance. Monitoring, troubleshooting, and enhancing data pipelines for high availability and reliability, as well as utilizing Docker and Kubernetes for scalable data processing, will be key aspects of your role. Collaboration with automation teams will also be required for effective project delivery.

The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, with a specific requirement for IIT graduates. You should have at least 4 years of experience in data engineering with a focus on cloud platforms like Azure, AWS, or GCP. Proficiency in PySpark, Azure Databricks, Python, and Apache Spark, along with expertise in relational, time-series, and NoSQL databases, is necessary. Experience with containerization tools like Docker and Kubernetes, strong analytical and problem-solving skills, familiarity with MLOps and DevOps practices, excellent communication and collaboration abilities, and the flexibility to adapt to a dynamic startup environment are desirable qualities for this role.
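The description above references the Medallion Architecture used in lakehouses on Databricks. Here is a minimal PySpark sketch of a bronze-to-silver cleansing step under that pattern; the Delta paths, sensor columns, and value ranges are illustrative assumptions only.

```python
# Illustrative bronze -> silver step in a Medallion-style lakehouse:
# deduplicate, normalize types, and drop implausible sensor readings.
# Table paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/sensor_readings")

silver = (bronze
          .dropDuplicates(["sensor_id", "event_ts"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .withColumn("temperature_c", F.col("temperature_c").cast("double"))
          .filter(F.col("temperature_c").between(-50, 150)))

(silver.write
 .format("delta")
 .mode("overwrite")
 .save("/mnt/lake/silver/sensor_readings"))
```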

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As an Azure Data Engineer (Databricks Specialist) at CrossAsyst, you will be part of our high-impact Data & AI team, working on critical client-facing projects. With over 5 years of experience, you will utilize your expertise in Azure data services and Databricks to build robust, scalable data pipelines and drive technology innovation for our clients in Pune.

Your key responsibilities will include designing, developing, and deploying end-to-end data pipelines using Azure Databricks, Data Factory, and Synapse. You will be responsible for data ingestion, transformation, and wrangling from various sources, optimizing Spark jobs and Databricks notebooks for performance and cost-efficiency. Implementing DevOps best practices for CI/CD, Git integration, and automated testing will be essential in your role.

Collaborating with cross-functional teams such as data scientists, architects, and stakeholders, you will design scalable data lakehouse and data warehouse solutions using Delta Lake and Synapse. Ensuring data security, access control, and compliance using Azure-native governance tools will also be a part of your responsibilities. Additionally, you will work closely with data science teams for feature engineering and machine learning workflows within Databricks.

Your proactive mindset and strong coding ability in PySpark will be crucial in writing efficient SQL and PySpark code for analytics and transformation tasks. It will also be essential to proactively monitor and troubleshoot data pipelines in production environments. In this role, documenting solution architectures, workflows, and data lineage will contribute to the successful delivery of scalable, secure, and high-performance data solutions.

If you are looking to make an impact by driving technology innovation and delivering better and faster outcomes, we welcome you to join our team at CrossAsyst.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As the leading bank in Asia, DBS Consumer Banking Group is in a unique position to help our customers realize their dreams and ambitions. With a full spectrum of products and services, including deposits, investments, insurance, mortgages, credit cards, and personal loans, we strive to support our customers at every life stage. Our financial solutions are tailored to meet your needs and aspirations effectively.

In line with this vision, a strong techno-functional team has been established in Hyderabad under the Data Chapter COE to support Consumer Banking Group's (CBG) analytics interventions. You will play a crucial role in this team by:
- Understanding business requirements and objectives clearly.
- Handling ad-hoc requests, providing impactful insights, and presenting effectively.
- Developing Qlik Sense dashboards to visualize data effectively.
- Examining issues and errors to enhance platform stability.
- Supporting end-to-end business initiatives to drive outcomes and performance.
- Conducting data extraction and investigation to derive meaningful conclusions.
- Improving productivity and reducing employee toil through data analytics.
- Demonstrating good communication skills to interact with customers regularly.
- Working collaboratively as part of a team and independently under supervision.
- Utilizing problem-solving skills and a proactive approach to address challenges effectively.

To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data and business analytics.
- Proficiency in Python, PySpark, and SQL for data processing and analysis.
- Familiarity with BI and analytical tools such as QlikView, Qlik Sense, etc.
- Knowledge of data architecture, particularly S3.
- Strong interpersonal and organizational skills with clear communication abilities.
- A results-driven personality with an analytical mindset and innovative thinking.
- Ability to analyze and modify code to ensure successful execution.
- Capability to manage multiple priorities and deadlines efficiently.
- Preferred domain knowledge in credit and debit cards.
- Demonstrated expertise in process management, encompassing evaluation, design, execution, measurement, monitoring, and control of business processes.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

As an Azure Data Engineer with expertise in Microsoft Fabric and modern data platform components, you will be responsible for designing, developing, and managing end-to-end data pipelines on Azure Cloud. Your primary focus will be on ensuring performance and scalability, and on delivering business value through efficient data solutions. You will collaborate with various teams to define data requirements and implement data ingestion, transformation, and modeling pipelines supporting structured and unstructured data. Additionally, you will work with Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI for seamless data integration and reporting.

Your role will involve optimizing data performance and cost through efficient architecture and coding practices, and ensuring data security, privacy, and compliance with organizational policies. Monitoring, troubleshooting, and improving data workflows for reliability and performance will also be part of your responsibilities.

To excel in this role, you should have 5 to 7 years of experience as a Data Engineer, with at least 2+ years working on the Azure data stack. Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration is crucial. Strong skills in data modeling, ETL/ELT design, and performance tuning are required, along with proficiency in SQL and Python/PySpark scripting. Experience with CI/CD pipelines and DevOps practices for data solutions, an understanding of data governance, security, and compliance frameworks, as well as excellent communication, problem-solving, and stakeholder management skills are essential for success in this role. A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.

Having the Microsoft Azure Data Engineer Certification (DP-203), experience in real-time streaming (e.g., Azure Stream Analytics or Event Hub), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric would be advantageous. Join us to work with the latest in Microsoft's modern data stack - Microsoft Fabric, collaborate with a team of passionate data professionals, work on enterprise-grade, large-scale data projects, experience a fast-paced, learning-focused work environment, and have immediate visibility and impact in key business decisions.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As an AWS Senior Data Engineer (SDE) at Infosys in India, you will work with a broad set of cloud data engineering technologies and tools. The role calls for expertise in SQL, PySpark, API endpoint ingestion, Glue, S3, Redshift, Step Functions, Lambda, CloudWatch, AppFlow, CloudFormation, and administrative tasks related to cloud services. Additionally, you will be expected to have knowledge of SDLF & OF frameworks and S3 ingestion patterns, and exposure to Git, JFrog, ADO, SNOW, Visual Studio, DBeaver, and SF inspector.

Your primary focus will be on leveraging these technologies to design, develop, and maintain data pipelines, ensuring efficient data processing and storage on the cloud platform. The ideal candidate has a strong background in cloud data engineering, familiarity with AWS services, and a proactive attitude towards learning and implementing new technologies. Excellent communication skills and the ability to work effectively within a team are essential for success in this role.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Zinnia is the leading technology platform for accelerating life and annuities growth, simplifying the experience of buying, selling, and administering insurance products. Our success is driven by a commitment to three core values: be bold, team up, deliver value. With over $180 billion in assets under administration, serving 100+ carrier clients, 2500 distributors, and partners, Zinnia enables more people to protect their financial futures.

We are looking for an experienced Data Engineer to join our data engineering team. Your role will involve designing, building, and optimizing robust data pipelines and platforms that power our analytics, products, and decision-making. You will collaborate with data scientists, analysts, product managers, and other engineers to deliver scalable, efficient, and reliable data solutions.

Your responsibilities will include designing, developing, and maintaining scalable big data pipelines using Spark (Scala or PySpark), Hive, and HDFS. You will also build and manage data workflows and orchestration using Airflow, write efficient production-grade code in languages like Python, Java, or Scala, and develop complex SQL queries for data transformation and reporting. Additionally, you will work on cloud platforms like AWS to deploy and manage data infrastructure and collaborate with data stakeholders to deliver high-quality data solutions.

To be successful in this role, you should have strong experience with the Big Data stack, excellent programming skills, expertise in SQL, hands-on experience with Spark tuning and optimization, and familiarity with Airflow for data workflow orchestration. A degree in Computer Science, Engineering, or a related field, along with at least 5 years of experience as a Data Engineer, is required. You should also have a proven track record of delivering production-ready data pipelines in big data environments and possess strong analytical thinking, problem-solving, and communication skills.

Preferred or nice-to-have skills include knowledge of the AWS ecosystem, experience with Trino or Presto for interactive querying, familiarity with Lakehouse formats, exposure to DBT for analytics engineering, experience with Kafka for streaming ingestion, and familiarity with monitoring tools like Prometheus and Grafana.

Joining our team as a Data Engineer will provide you with the opportunity to work on cutting-edge technologies, collaborate with a diverse group of professionals, and contribute to impactful projects that shape the future of insurance technology.
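The role above pairs Spark pipelines with Airflow orchestration. Below is a minimal Airflow 2.x-style DAG sketch that submits a PySpark script nightly via spark-submit; the DAG id, schedule, and script path are placeholders rather than Zinnia's actual setup.

```python
# Hypothetical Airflow DAG: run a nightly PySpark batch job via spark-submit.
# DAG id, schedule, and script path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_spark_batch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # 02:00 daily
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/transform_positions.py"
        ),
    )
```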

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

navi mumbai, maharashtra

On-site

As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.

We are hiring for the position of Quant Developer at Associate Manager/Manager level for SS&C GlobeOp Financial Services, with office locations in Mumbai, Hyderabad, Pune, and Gurgaon. The ideal candidate should have experience in agile/scrum project management along with strong proficiency in Python, SQL, and KDB. Additional experience in Databricks, Fabric, PySpark/TensorFlow, C/C++, and other data management tools such as Arctic, Mongo, and dashboarding will be considered a plus. Hedge fund experience is also desirable for this role.

Interested candidates are encouraged to apply directly to SS&C Technologies, Inc. or its affiliated companies. Please note that unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services will not be accepted unless explicitly requested or approached by the company.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Business Intelligence Specialist at Adobe, you will have the opportunity to work closely with business analysts to understand design specifications and translate requirements into technical models, dashboards, reports, and applications. Your role will involve collaborating with business users to cater to their ad-hoc requests and deliver scalable solutions on MSBI platforms. You will be responsible for system integration of data sources, creating technical documents, and ensuring data and code quality through standard methodologies and processes.

To succeed in this role, you should have at least 3 years of experience in SSIS, SSAS, Data Warehousing, Data Analysis, and Business Intelligence. You should also possess advanced proficiency in Data Warehousing tools and technologies, including databases, SSIS, and SSAS, along with an in-depth understanding of Data Warehousing principles and Dimensional Modeling techniques. Hands-on experience in ETL processes, database optimization, and query tuning is essential. Familiarity with cloud platforms such as Azure and AWS, as well as Python or PySpark and Databricks, would be beneficial. Experience in creating interactive dashboards using Power BI is an added advantage.

In addition to technical skills, strong problem-solving and analytical abilities, quick learning capabilities, and excellent communication and presentation skills are important for this role. A Bachelor's degree in Computer Science, Information Technology, or an equivalent technical discipline is required.

At Adobe, we value a free and open marketplace for all employees and provide internal growth opportunities for your career development. We encourage creativity, curiosity, and continuous learning as part of your career journey. To prepare for internal opportunities, update your Resume/CV and Workday profile, explore the Internal Mobility page on Inside Adobe, and check out tips to help you prep for interviews. The Talent Team will reach out to you within 2 weeks of applying for a role via Workday, and if you move forward in the interview process, inform your manager for support in your career growth.

Join Adobe to work in an exceptional environment with colleagues committed to helping each other grow through ongoing feedback. If you are looking to make an impact and grow your career, Adobe is the place for you. Discover more about employee experiences on the Adobe Life blog and explore the meaningful benefits we offer. For any accommodation needs during the application process, please contact accommodations@adobe.com.

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

delhi

On-site

You will be responsible for leading and mentoring a team of data engineers to ensure high-quality delivery across various projects. Your role will involve designing, building, and optimizing large-scale data pipelines and integration workflows using Azure Data Factory (ADF) and Synapse Analytics. Additionally, you will be tasked with architecting and implementing scalable data solutions on the Azure cloud, leveraging tools such as Databricks and Microsoft Fabric. Writing efficient and maintainable code using PySpark and SQL for data transformations will be a key part of your responsibilities.

Collaboration with data architects, analysts, and business stakeholders to define data strategies and requirements is crucial. You will also be expected to implement and promote Data Mesh principles within the organization, provide architectural guidance, and offer solutions for new and existing data projects on Azure. Ensuring data quality, governance, and security best practices are followed, and staying updated with evolving Azure services and data technologies, are essential aspects of the role.

In terms of required skills and experience, you should possess at least 6 years of professional experience in data engineering and solution architecture. Expertise in Azure Data Factory (ADF) and Azure Synapse Analytics is necessary. Strong hands-on experience with Databricks, PySpark, and advanced SQL is also expected. A good understanding of Microsoft Fabric and its use cases, along with deep knowledge of Azure cloud services related to data storage, processing, and integration, will be beneficial. Familiarity with Data Mesh architecture and distributed data product ownership is desirable. Strong problem-solving and debugging skills, as well as excellent communication and stakeholder management abilities, are essential for this role.

It would be advantageous to have experience with CI/CD pipelines for data solutions, knowledge of data security and compliance practices on Azure, and a certification in Azure Data Engineering or Solution Architecture.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Company Description

👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience of 10+ years.
- Strong working experience in Data Engineering and Big Data platforms.
- Hands-on experience with Python and PySpark.
- Expertise with AWS Glue, including Crawlers and the Data Catalog.
- Hands-on experience with Snowflake.
- Strong understanding of AWS services: S3, Lambda, Athena, SNS, Secrets Manager.
- Experience with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform.
- Strong experience with CI/CD pipelines, preferably using GitHub Actions.
- Working knowledge of Agile methodologies, JIRA, and GitHub version control.
- Experience with data quality frameworks and observability.
- Exposure to data governance tools and practices.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
- Writing and reviewing great quality code.
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements.
- Mapping decisions with requirements and translating them to developers.
- Identifying different solutions and narrowing down the best option that meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
- Resolving issues raised during code reviews through exhaustive, systematic analysis of the root cause, and justifying the decisions taken.
- Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Qualifications: Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
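The requirements above span AWS Glue, the Data Catalog, S3, and Athena. As a small, hedged illustration of how these fit together, the snippet below uses boto3 to run an Athena query against a Glue Catalog table and polls for completion; the region, database, table, and bucket names are placeholders.

```python
# Hypothetical boto3 sketch: query a Glue Data Catalog table via Athena
# and poll for completion. Region, database, table, and bucket are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="SELECT order_id, amount FROM raw_orders LIMIT 10",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = response["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} rows")  # first row is the column header
```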

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

As a Portfolio Data Analyst at Addepar, you will play a crucial role in integrating client portfolio data into our leading portfolio management products. Your responsibilities will include analyzing and onboarding portfolio data from various sources, partnering with internal teams to deliver high-quality data solutions, and working with third-party data providers to support data integration into our platform. You will collaborate with engineering, product management, and data operations to ensure timely and reliable data solutions that meet client needs.

Your role will involve developing data aggregations and functionality based on user workflow needs, as well as working on initiatives to improve overall data management and integration. You will also contribute to the evolution of Addepar's financial concordance solutions to better serve clients in wealth management and beyond.

To excel in this role, you should have a minimum of 3+ years of experience working with financial data and concepts relevant to Addepar's clients and products, particularly in wealth management and portfolio management. Technical skills in tools like Excel, SQL, Python, PySpark, Databricks, or other financial services systems are preferred. Strong communication and interpersonal skills are essential for working effectively with vendors, clients, and internal partners. This position requires you to work from Addepar's Pune office three days a week as part of a hybrid work model.

Addepar values a diverse and inclusive workplace, where individuals from different backgrounds and identities come together to drive innovative solutions. As an equal opportunity employer, Addepar is committed to promoting a welcoming environment where inclusion and belonging are shared responsibilities. If you are passionate about finance and technology, enjoy solving complex problems in investment management, and have experience in data analysis workflows and tools, this role offers you the opportunity to build on your existing expertise in the investment domain.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a skilled professional in the field of Big Data and Analytics, you will be responsible for utilizing your expertise to drive impactful solutions for Standard Chartered Bank. Your role will involve leveraging your proficiency in various technologies and frameworks such as Hadoop, HDFS, HIVE, SPARK, Bash Scripting, SQL, and others. Your ability to handle raw and unstructured data while adhering to coding standards and software development life cycles will be crucial in ensuring the success of the projects you are involved in.

In addition to your technical skills, you will also play a key role in Regulatory & Business Conduct by embodying the highest standards of ethics and compliance. Your responsibilities will include identifying and mitigating risks, as well as ensuring compliance with relevant laws and regulations. Collaborating effectively with FCSO development teams and FCSO Business stakeholders will be essential to achieving the desired outcomes.

Your technical competencies in areas such as Hadoop, Apache Hive, PySpark, SQL, Azure DevOps, and Control M will be instrumental in fulfilling the responsibilities of this role. Your action-oriented approach, ability to collaborate, and customer focus will further contribute to your success in this position.

Standard Chartered Bank is committed to fostering a diverse and inclusive work environment where each individual's unique talents are celebrated. By joining our team, you will have the opportunity to make a positive impact and drive commerce and prosperity through our valued behaviours. If you are passionate about utilizing your skills to create meaningful change and grow professionally, we invite you to be a part of our dynamic team at Standard Chartered Bank.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Data Engineer at our IT Services Organization, you will be responsible for developing and maintaining scalable data processing systems using Apache Spark and Python. Your role will involve designing and implementing Big Data solutions that integrate data from various sources, including RDBMS, NoSQL databases, and cloud services. Additionally, you will lead a team of data engineers to ensure efficient project execution and adherence to best practices.

Your key responsibilities will include optimizing Spark jobs for performance and scalability, collaborating with cross-functional teams to gather requirements, and delivering data solutions that meet business needs. You will also be involved in implementing ETL processes and frameworks to facilitate data integration and utilizing cloud data services such as GCP for data storage and processing. Applying Agile methodologies to manage project timelines and deliverables will be an essential part of your role.

To excel in this position, you should have proficiency in PySpark and Apache Spark, along with strong knowledge of Python for data engineering tasks. Hands-on experience with Google Cloud Platform (GCP) and expertise in designing and optimizing Big Data pipelines are crucial. Leadership skills in data engineering team management, understanding of ETL frameworks and distributed computing, familiarity with cloud-based data services, and experience with Agile delivery are also required.

We are looking for candidates with a Bachelor's degree in Computer Science, Information Technology, or a related field. It is essential to stay updated with the latest trends and technologies in Big Data and cloud computing to contribute effectively to our projects. If you are passionate about data engineering and eager to work in a dynamic and innovative environment, we encourage you to apply for this exciting opportunity.
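Optimizing Spark jobs for performance and scalability, as called out above, often comes down to join strategy, partitioning, and caching. The hedged sketch below broadcasts a small dimension table into a join with a large fact table; the GCS paths and column names are hypothetical.

```python
# Illustrative Spark optimization: broadcast a small dimension table
# to avoid shuffling the large fact table, then cache the reused result.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-tuning-example").getOrCreate()

orders = spark.read.parquet("gs://example-bucket/facts/orders/")      # large
products = spark.read.parquet("gs://example-bucket/dims/products/")   # small

enriched = (orders
            .join(broadcast(products), on="product_id", how="left")
            .repartition(200, "order_date")   # control downstream partitioning
            .cache())                         # reused by several aggregations

daily_revenue = (enriched
                 .groupBy("order_date")
                 .agg(F.sum("amount").alias("revenue")))
daily_revenue.show(5)
```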

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

As a Power BI Developer, you will be responsible for developing and maintaining scalable data pipelines using Python and PySpark. You will collaborate with data engineers and data scientists to fulfill data processing needs and optimize existing PySpark applications for performance improvements. Writing clean, efficient, and well-documented code following best practices is a crucial part of your role. Additionally, you will participate in design and code reviews, develop and implement ETL processes, and ensure data integrity and quality throughout the data lifecycle. Staying current with the latest industry trends and technologies in big data and cloud computing is essential.

The ideal candidate should have a minimum of 6 years of experience in designing and developing advanced Power BI reports and dashboards, with working experience in data modeling and DAX calculations, developing and maintaining data models, creating reports and dashboards, analyzing and visualizing data, ensuring data governance and compliance, and troubleshooting and optimizing Power BI solutions.

Preferred skills for this role include strong proficiency in Power BI Desktop, DAX, Power Query, and data modeling. Experience in analyzing data, creating visualizations, building interactive dashboards, connecting to various data sources, and transforming data is highly valued. Excellent communication and collaboration skills are necessary to work effectively with stakeholders. Familiarity with SQL, data warehousing concepts, and experience with UI/UX development would be beneficial.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You are an exceptional, innovative, and passionate individual seeking to grow with NTT DATA, a trusted global innovator of business and technology services. As a Systems Integration Senior Analyst based in Hyderabad, Telangana (IN-TG), India (IN), you will join a forward-thinking organization that values inclusivity and adaptability.

Your role will involve the following hands-on experience:
- At least 5 years of overall experience, with a minimum of 2 years in Azure Databricks.
- Proficiency in Python/PySpark.
- Strong hands-on experience in SQL, including MS SQL Server, SSIS, Stored Procedures, and Views.
- Experience in ETL Testing, with a good understanding of all testing concepts.
- Familiarity with Agile methodologies.
- Excellent communication skills to effectively handle client calls.

About NTT DATA: NTT DATA is a $30 billion global leader in business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA fosters innovation, optimization, and transformation for long-term success. With diverse experts in over 50 countries and a robust partner ecosystem, our services encompass business consulting, data and artificial intelligence, industry solutions, application development, infrastructure management, and connectivity. NTT DATA is at the forefront of digital and AI infrastructure globally, ensuring organizations and societies transition confidently into the digital future. As part of the NTT Group, we invest over $3.6 billion annually in R&D to drive sustainable progress. Learn more about us at us.nttdata.com.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Embark on a transformative journey as an MI and Reporting Analyst Data Science Assistant Vice President at Barclays, where you'll play a pivotal role in shaping the future. In this role, you will provide guidance to other team members on all aspects of reporting, information management, dashboard creation, maintenance, automation, data set curation, and data set transformation. Join us in our mission to safeguard our business and customers from financial risks. With competitive benefits and opportunities for career advancement, Barclays is a great place to grow your career in the banking industry.

Key critical skills required for this role include:
- A large part of this role will be ongoing support and enhancement of the warehouse, reporting, information management and its design, dashboard creation, maintenance, automation, data set curation, and data set transformation.
- Given the high exposure to data, reporting, warehouses, and dashboards, this role will be governed by Service Level Agreements and will be responsible for adherence to data standards and timelines.
- Delivering insights to enable earlier, faster, smarter decisions. This will rely on stakeholder management, partnership, and building relationships. Another key factor will be the ability to present and articulate a value proposition.
- Act as a data/technical expert for warehouses and tools used within the department, providing support to colleagues, and provide guidance to the rest of the team on all issues relating to data sources.
- Through extensive knowledge of Python, PySpark, and/or SQL, provide guidance to other team members on all aspects of coding, including efficiency and effectiveness.
- Be a subject matter expert, and support colleagues in other appropriate teams by sharing knowledge and best practices in coding and data.
- Currently this role is intended to be an individual contributor; however, this can evolve over time.
- Graduate in any discipline.
- Ability to work dedicated shifts in the range of 12 noon IST to 12 AM IST.
- Minimum 6 years of experience in the Data Science domain (Analytics and Reporting).
- Ability to write, study, and correct Python and SQL code is mandatory.
- Ability to work with Big Data.

You may be assessed on key essential skills relevant to succeed in the role, such as strong knowledge of Python, SQL, Big Data, Power BI, and Tableau, strategic thinking, as well as job-specific technical skills. This role is based out of our Noida office.

Purpose of the role: To implement data quality processes and procedures, ensuring that data is reliable and trustworthy, then extract actionable insights from it to help the organisation improve its operations and optimise resources.

Accountabilities:
- Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification.
- Execution of data cleansing and transformation tasks to prepare data for analysis.
- Designing and building data pipelines to automate data movement and processing.
- Development and application of advanced analytical techniques, including machine learning and AI, to solve complex business problems.
- Documentation of data quality findings and recommendations for improvement.

Assistant Vice President Expectations:
- To advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions.
- Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
- For an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes.
- Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues.
- Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda.
- Take ownership for managing risk and strengthening controls in relation to the work done.
- Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function.
- Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy.
- Engage in complex analysis of data from multiple sources of information, internal and external (such as procedures and practices in other areas, teams, and companies), to solve problems creatively and effectively.
- Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience.
- Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
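The accountabilities above centre on data quality investigation and pipeline work in Python/PySpark. A minimal, hedged sketch of a rule-based completeness and uniqueness check is shown below; the dataset, columns, and rules are hypothetical.

```python
# Hypothetical PySpark data quality sketch: completeness and uniqueness checks
# on a reporting dataset, emitting a simple pass/fail summary.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/reporting/customer_balances/")

total = df.count()
checks = {
    "customer_id_not_null": df.filter(F.col("customer_id").isNull()).count() == 0,
    "customer_id_unique": df.select("customer_id").distinct().count() == total,
    "balance_non_negative": df.filter(F.col("balance") < 0).count() == 0,
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")

if not all(checks.values()):
    raise ValueError("Data quality checks failed; see summary above.")
```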

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

Sykatiya Technology Pvt Ltd is a leading semiconductor industry innovator committed to leveraging cutting-edge technology to solve complex problems. We are currently looking for a highly skilled and motivated Data Scientist to join our dynamic team and contribute to our mission of driving innovation through data-driven insights.

As the Lead Data Scientist and Machine Learning Engineer at Sykatiya Technology Pvt Ltd, you will play a crucial role in analyzing large datasets to uncover patterns, develop predictive models, and implement AI/ML solutions. Your responsibilities will include working on projects involving neural networks, deep learning, data mining, and natural language processing (NLP) to drive business value and enhance our products and services.

Key Responsibilities:
- Lead the design and implementation of machine learning models and algorithms to address complex business problems.
- Utilize deep learning techniques to enhance neural network models and improve prediction accuracy.
- Conduct data mining and analysis to extract actionable insights from both structured and unstructured data.
- Apply natural language processing (NLP) techniques for advanced text analytics.
- Develop and maintain end-to-end data pipelines, ensuring data integrity and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Mentor and guide junior data scientists and engineers in best practices and advanced techniques.
- Stay updated with the latest advancements in AI/ML, neural networks, deep learning, data mining, and NLP.

Technical Skills:
- Proficiency in Python and its libraries such as NumPy, pandas, scikit-learn, TensorFlow, Keras, and PyTorch.
- Strong understanding of machine learning algorithms and techniques.
- Extensive experience with neural networks and deep learning frameworks.
- Hands-on experience with data mining and analysis techniques.
- Proficiency in natural language processing (NLP) tools and libraries like NLTK, spaCy, and transformers.
- Proficiency in Big Data technologies including Sqoop, Hadoop, HDFS, Hive, and PySpark.
- Experience with cloud platforms, such as AWS services like S3, Step Functions, EventBridge, Athena, RDS, Lambda, and Glue.
- Strong knowledge of database management systems like SQL, Teradata, MySQL, PostgreSQL, and Snowflake.
- Familiarity with other tools like ExactTarget, Marketo, SAP BO, Agile, and JIRA.
- Strong analytical skills to analyze large datasets and derive actionable insights.
- Excellent problem-solving skills with the ability to think critically and creatively.
- Effective communication skills and teamwork abilities to collaborate with various stakeholders.

Experience: At least 8 to 12 years of experience in a similar role.
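Since the role combines NLP with classical machine learning, here is a small, hedged sketch of a text-classification baseline using scikit-learn's TF-IDF vectorizer and a linear classifier; the sample texts and labels are invented for illustration.

```python
# Hypothetical NLP baseline: TF-IDF features + a linear classifier
# for simple text classification. Texts and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = [
    "the wafer yield improved after the process change",
    "sensor drift is causing defective batches",
    "throughput is stable and within spec",
    "repeated equipment faults on line three",
]
labels = ["positive", "negative", "positive", "negative"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LinearSVC()),
])
model.fit(texts, labels)

print(model.predict(["yield is within spec", "another equipment fault today"]))
```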

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

indore, madhya pradesh

On-site

You should have 6-8 years of hands-on experience with Big Data technologies such as PySpark (DataFrame and Spark SQL), Hadoop, and Hive, along with good hands-on experience with Python and Bash scripts and a solid understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis, and research skills are crucial for this role, as is a demonstrable ability to think creatively and independently, beyond relying solely on readily available tools. Excellent communication, presentation, and interpersonal skills are a must for effective collaboration within the team.

Hands-on experience with cloud-provided Big Data technologies such as IAM, Glue, EMR, Redshift, S3, and Kinesis is required. Experience in orchestrating with Airflow or any job scheduler is highly beneficial, and familiarity with migrating workloads from on-premise to cloud and cloud-to-cloud migrations is also desired.

In this role, you will be responsible for developing efficient ETL pipelines based on business requirements while adhering to development standards and best practices. Integration testing of different pipelines in the AWS environment and providing estimates for development, testing, and deployments on various environments will be part of your responsibilities, as will participation in peer code reviews to ensure compliance with best practices. Creating cost-effective AWS pipelines using the necessary AWS services (S3, IAM, Glue, EMR, Redshift, etc.) is a key aspect of this position.

Your experience should range from 6 to 8 years in relevant fields. The job reference number for this position is 13024.

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
