5.0 - 10.0 years
10 - 14 Lacs
Chennai
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: Engineering graduate, preferably in Computer Science, with 15 years of full-time education.
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the application development process and ensure successful project delivery.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Ensure timely project delivery
- Provide technical guidance and mentorship to team members
Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark
- Strong understanding of big data processing
- Experience with data processing frameworks such as Apache Spark
- Hands-on experience in designing and implementing scalable data pipelines
- Solid grasp of data manipulation and transformation techniques
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark
- This position is based at our Chennai office
- An Engineering graduate, preferably in Computer Science, with 15 years of full-time education is required
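For illustration only, a minimal sketch of the kind of scalable PySpark pipeline this role describes: read raw data, apply basic transformations, and write partitioned output. The source path, column names, and filter rule are assumptions, not part of the posting.

```python
# Minimal PySpark batch pipeline sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Ingest raw data (assumed CSV landing zone).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Typical transformation step: cast types, derive columns, drop bad records.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet so downstream jobs can prune by date.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```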
Posted 6 days ago
5.0 - 10.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: An Engineering graduate, preferably in Computer Science, with 15 years of full-time education.
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the application development process and ensure successful project delivery.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Ensure timely project delivery
- Provide technical guidance and support to the team
Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark
- Strong understanding of big data processing
- Experience with cloud platforms like AWS or Azure
- Hands-on experience in designing and implementing scalable applications
- Knowledge of data modeling and database management
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark
- This position is based at our Pune office
- An Engineering graduate, preferably in Computer Science, with 15 years of full-time education is required
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About _VOIS:
_VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, _VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.
About _VOIS India:
In 2009, _VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, _VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more.
Key Responsibilities
- Design and build data pipelines: develop scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Create efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Build and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Utilize QuickSight to create insightful data visualizations and dashboards.
- Quickly develop innovative Proof-of-Concept (POC) solutions to address emerging needs.
- Provide support and manage the ongoing operation of data services.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.
- Work with teams to design and develop data products that support marketing and other business functions.
- Ensure data services are reliable, maintainable, and seamlessly integrated with existing systems.
Required Skills And Experience
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Technical skills: proficiency in Python with Pandas and PySpark.
- Hands-on experience with AWS services including S3, Glue, Lambda, API Gateway, and SQS.
- Knowledge of data processing tools like Spark, Hive, Kafka, and Airflow.
- Experience with batch job scheduling and managing data dependencies.
- Experience with QuickSight or similar tools.
- Familiarity with DevOps automation tools like GitLab, Bitbucket, Jenkins, and Maven.
- Understanding of Delta would be an added advantage.
_VOIS Equal Opportunity Employer Commitment India
_VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
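To make the AWS pipeline work above concrete, here is a hedged sketch of a Glue PySpark job that reads a table registered in the Glue Data Catalog and writes curated Parquet back to S3. The database, table, event filter, and bucket names are hypothetical; a real job would match the team's own catalog and sinks.

```python
# Sketch of an AWS Glue (PySpark) ETL job; catalog and S3 names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a Glue Data Catalog table (hypothetical database/table names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="marketing_raw", table_name="campaign_events"
)

# Convert to a Spark DataFrame for SQL-style filtering and transformation.
clicks = dyf.toDF().filter("event_type = 'click'")

# Write curated Parquet to S3 for downstream Redshift/QuickSight consumption.
clicks.write.mode("overwrite").parquet("s3://example-bucket/curated/campaign_clicks/")

job.commit()
```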
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplace in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
Posted 6 days ago
5.0 - 9.0 years
12 - 22 Lacs
Mohali
Remote
Company Overview:
At Ensemble Health Partners, we're at the forefront of innovation, leveraging cutting-edge technology to drive meaningful impact in the Revenue Cycle Management landscape. Our future-forward technology combines tightly integrated data ingestion, workflow automation and business intelligence solutions on a modern cloud architecture. We have the second-largest share of the RCM space in the US market, with 10,000+ professionals working in the organization. With 10 technology patents in our name, we believe the best results come from a combination of a skilled and experienced team, proven and repeatable processes, and modern and flexible technologies. As a leading player in the industry, we offer an environment that fosters growth, creativity, and collaboration, where your expertise will be valued and your contributions will make a difference.
Role & Responsibilities:
Experience: 5-9 years
Location: Remote/WFH
Position Summary: Design and maintain scalable data pipelines, manage ETL processes and data warehouses, ensure data quality and governance, collaborate with cross-functional teams, support machine learning deployment, lead projects, mentor juniors, work with big data and cloud technologies, and bring expertise in Spark, Databricks, Streaming/Reactive/Event-driven systems, Agentic programming, and LLM application development.
Required Skills:
- Spark, Databricks, Streaming/Reactive/Event-driven, Agentic programming & LLM application experience
- 5+ years of coding experience with Microsoft SQL
- 3+ years working with big data technologies including but not limited to Databricks, Apache Spark, Python, Microsoft Azure (Data Factory, Dataflows, Azure Functions, Azure Service Bus), with a willingness and ability to learn new ones
- Excellent understanding of engineering fundamentals: testing automation, code reviews, telemetry, iterative delivery and DevOps
- Experience with polyglot storage architectures including relational, columnar, key-value, graph or equivalent
- Experience with Delta tables as well as Parquet files stored in ADLS
- Experience delivering applications using componentized and distributed architectures using event-driven patterns
- Demonstrated ability to communicate effectively to both technical and non-technical, globally distributed audiences
- Solid foundations in formal architecture, design patterns and best practices
- Experience working with healthcare datasets
Why Join Us?
We adapt emerging technologies to practical uses to deliver concrete solutions that bring maximum impact to providers' bottom line. We currently have 10 technology patents in our name. We offer you a great organization to work for, where you will get to do the best work of your career and grow with the team that is shaping the future of Revenue Cycle Management. We have a strong focus on learning and development, with industry-standard professional development policies to support the learning goals of our associates. We offer flexible, remote, and work-from-home options.
Benefits
- Health benefits and insurance coverage for family and parents
- Accidental insurance for the associate
- Compliant with all labor laws: maternity benefits, paternity leaves
- Company swag: welcome packages, work anniversary kits
- Exclusive referral policy
- Professional development program and reimbursements
- Remote work flexibility to work from home
Please share your resume at yash.arora@ensemblehp.com with current CTC, expected CTC, and notice period.
Posted 6 days ago
5.0 - 10.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Description: Data Engineer/Lead
Required Minimum Qualifications
- Bachelor's degree in Computer Science, CIS, or a related field
- 5-10 years of IT experience in software engineering or a related field
- Experience on project(s) involving the implementation of software development life cycles (SDLC)
Primary Skills: PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc)
Responsibilities
- Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open source tools and data processing frameworks.
- Hands-on experience with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive and Airflow.
- Experience in GCP Cloud Composer, BigQuery and Dataproc.
- Offer system support as part of a support rotation with other team members.
- Operationalize open-source data-analytic tools for enterprise use.
- Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
- Understand and follow the company development lifecycle to develop, deploy and deliver.
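As a hedged illustration of the GCP stack named above, a small PySpark job as it might run on Dataproc, reading from and writing to BigQuery via the spark-bigquery connector. The project, dataset, table, and temporary bucket names are placeholders, and the connector JAR is assumed to be available on the cluster.

```python
# Sketch of a Dataproc PySpark job using the spark-bigquery connector (names are placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bq_daily_aggregate").getOrCreate()

# Read a raw events table from BigQuery.
events = (
    spark.read.format("bigquery")
         .option("table", "my_project.raw_dataset.events")
         .load()
)

# Aggregate events per day.
daily = events.groupBy(F.to_date("event_ts").alias("event_date")).count()

# Write aggregates back to BigQuery; the indirect write path needs a temporary GCS bucket.
(
    daily.write.format("bigquery")
         .option("table", "my_project.curated_dataset.daily_event_counts")
         .option("temporaryGcsBucket", "example-temp-bucket")
         .mode("overwrite")
         .save()
)
```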
Posted 6 days ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
This position is part of the APM team in Corporate Solutions. You will play an important role in driving high-quality analytics and insights by combining your technical and analytical skills with the insurance knowledge of the broader organization. As a Portfolio Analyst, you will contribute to and support data management and visualization activities using Palantir technologies.
About The Team
The APM team comprises a mix of highly experienced members already entrenched in advanced analytics and highly motivated newer members who thrive in a dynamic, fast-learning environment. The team reports into the Chief Underwriting Officer and is part of the Actuarial Portfolio Management Unit.
About You
- Convincing interpersonal skills and ability to maintain effective working relations in a multi-disciplinary and multi-cultural environment
- Self-starter, organized, and able to handle multiple priorities and meet deadlines
- Able to apply quantitative skills, business knowledge and logical thinking, and to communicate the key message through visualization and presentation
Roles & Responsibilities
- Design and implement data pipelines to extract, transform, and load data from various sources into datasets.
- Build and maintain dashboards which communicate policy, costing and related insights.
- Collaborate with peers and senior team members to optimize data management processes.
- Perform data quality checks and troubleshooting.
- Maintain comprehensive documentation and data lineage across multiple systems.
- Contribute to developing and implementing the data analytics solutions strategy.
- Provide support to the end consumers of the data.
Professional Experience
- Minimum 2-3 years of hands-on work experience in the data field
- Hands-on experience in building ETL data pipelines is required
- Proficiency with Python, PySpark, and SQL, or similar programming and query languages; TypeScript is a plus
- Ability to pick up new technologies quickly
- Experience with Palantir technologies is a plus
- Demonstrated ability to analyze complex data-related challenges and to identify effective solutions
- Experience with Scrum/Agile development methodologies is a plus
Personal Skills
- You are motivated to focus on executing and delivering high-quality results on time
- You can articulate and communicate your work effectively and are comfortable presenting your work to senior team members and leaders
- You work for the collective success of the team in close collaboration with senior team members
- You are open and dependable, and demonstrate collaboration and intercultural competence
Educational Level
A Bachelor’s or Master’s degree in computer science, data or software engineering, or equivalent work experience
About Swiss Re
Swiss Re is one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking.
We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience. Reference Code: 133934
Posted 6 days ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Juniper, we believe the network is the single greatest vehicle for knowledge, understanding, and human advancement the world has ever known. To achieve real outcomes, we know that experience is the most important requirement for networking teams and the people they serve. Delivering an experience-first, AI-Native Network pivots on the creativity and commitment of our people. It requires a consistent and committed practice, something we call the Juniper Way.
Position: Supply Chain Data Engineer
Experience: 7+ years
Location: Bangalore
About the Position:
Juniper's Supply Chain Operations is a data-driven organization, and the demand for Data Engineering, Data Science and Analytics solutions for decision-making has increased 4x over the last 3 years. In addition, continuous changes in the regulatory environment and geo-political issues call for a very flexible and resilient supply chain requiring many new data-driven use cases. We need a self-motivated team player for this critical role in the Data Analytics Team to continue to satisfy and fulfill the growing demand for data and data-driven solutions, including developing AI solutions on top of the SCO data stack.
Responsibilities:
As a member of the SCO Analytics team, this role will be responsible for implementing and delivering Business Intelligence initiatives in supply chain operations. This role will collaborate with key business users, develop key metrics and reports, and prepare the underlying data using automated data preparation tools like Alteryx. This role will also interface with Juniper Enterprise IT for seamless delivery of integrated solutions. Major responsibilities include leading/delivering Data Science & Business Intelligence initiatives in supply chain operations, collaborating with key business users, developing insightful analytical models, metrics and reports, and coordinating with Juniper Enterprise IT for seamless delivery of system-based solutions.
Minimum Qualifications:
- Bachelor’s degree
- 7+ years of hands-on skills and understanding of Reporting Solutions and Data Models
- Building end-to-end Data Engineering pipelines for semi-structured and unstructured data (text, all kinds of simple/complex table structures, images, video and audio data)
- Python, PySpark, SQL, RDBMS
- Data transformation (ETL/ELT) activities
- SQL data warehouse (e.g. Snowflake) working knowledge, preferably administration
- Techno-functional system analysis skills including requirements documentation, use case definition and testing methodologies
- Experience in managing Data Quality and Data Catalog solutions
- Ability to learn and adapt to Juniper's end-to-end business processes
- Strong interpersonal, written and verbal communication
Preferred Qualifications:
- Working experience in analytics solutions such as Snowflake, Tableau, Databricks, Alteryx and SAP Business Objects tools is preferred.
- Understanding of Supply Chain business processes and their integration with other areas of the business
Personal Skills:
- Ability to collaborate cross-functionally and build sound working relationships within all levels of the organization
- Ability to handle sensitive information with keen attention to detail and accuracy
- Passion for data handling ethics
- Effective time management skills and ability to solve complex technical problems with creative solutions while anticipating stakeholder needs and helping meet or exceed expectations
- Comfortable with ambiguity and uncertainty of change when assessing needs for stakeholders
- Self-motivated and innovative; confident when working independently, but an excellent team player with a growth-oriented personality
Other Information:
Relocation is not available for this position. Travel requirements for the position: 10%.
About Juniper Networks
Juniper Networks challenges the inherent complexity that comes with networking and security in the multicloud era. We do this with products, solutions and services that transform the way people connect, work and live. We simplify the process of transitioning to a secure and automated multicloud environment to enable secure, AI-driven networks that connect the world. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.
WHERE WILL YOU DO YOUR BEST WORK?
Wherever you are in the world, whether it's downtown Sunnyvale or London, Westford or Bengaluru, Juniper is a place that was founded on disruptive thinking - where colleague innovation is not only valued, but expected. We believe that the great task of delivering a new network for the next decade is delivered through the creativity and commitment of our people. The Juniper Way is the commitment to all our colleagues that the culture and company inspire their best work - their life's work. At Juniper we believe this is more than a job - it's an opportunity to help change the world. At Juniper Networks, we are committed to elevating talent by creating a trust-based environment where we can all thrive together. If you think you have what it takes, but do not necessarily check every single box, please consider applying. We’d love to speak with you.
Additional Information for United States jobs:
ELIGIBILITY TO WORK AND E-VERIFY
In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. Juniper Networks participates in the E-Verify program. E-Verify is an Internet-based system operated by the Department of Homeland Security (DHS) in partnership with the Social Security Administration (SSA) that allows participating employers to electronically verify the employment eligibility of new hires and the validity of their Social Security Numbers. Information for applicants about E-Verify / E-Verify Información en español: This Company Participates in E-Verify / Este Empleador Participa en E-Verify. Immigrant and Employee Rights Section (IER) - The Right to Work / El Derecho a Trabajar. E-Verify® is a registered trademark of the U.S. Department of Homeland Security. Juniper is an Equal Opportunity workplace. We do not discriminate in employment decisions on the basis of race, color, religion, gender (including pregnancy), national origin, political affiliation, sexual orientation, gender identity or expression, marital status, disability, genetic information, age, veteran status, or any other applicable legally protected characteristic. All employment decisions are made on the basis of individual qualifications, merit, and business need.
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Digital Solutions and Innovation (DSI) team within the Citi Internal Audit Innovation function is looking for a Business Analytics Analyst (Officer) to join the Internal Audit Analytics Team. The Analytics Team works with members of Internal Audit to identify opportunities and to design, develop and implement analytics in support of the performance of audit activities, along with automation activities to promote efficiencies and expand coverage. The candidate must be proficient in the development and use of analytics technology and tools to provide analytical insight and automated solutions to enhance audit efficiency and effectiveness, and have functional knowledge of banking processes and related risks and controls.
Key Responsibilities:
- Participating in the innovative use of audit analytics through direct participation in all phases of audits
- Supporting the definition of data needs, and designing and executing audit analytics during audits in accordance with the audit methodology and professional standards
- Supporting execution of automated routines to help focus audit testing
- Executing innovation solutions and pre-defined analytics in accordance with standard A&A procedures
- Assisting audit teams in performing moderately complex audits related to a specific area of the bank: Consumer Banking, Investment Banking, Risk, Finance, Compliance, and/or Technology
- Providing support to other members of the Analytics and Automation team, and the wider Digital Solutions and Innovation team
- Strong verbal and written communication skills to clearly articulate analytics requirements and results
- Developing professional relationships with audit teams to assist in the definition of analytics and automation opportunities
- Developing effective working relationships with technology and business teams of the area being audited, to facilitate understanding of processes and sourcing of data
- Promoting continuous improvement in all aspects of audit automation activities (e.g., technical environment, software, operating procedures)
Key Qualifications And Competencies
- At least 3 years of business/audit analyst experience in providing analytical techniques and automated solutions to business needs
- Work experience in a global environment and in a large company
- Excellent technical, programming and database skills
- Excellent analytical ability to understand business processes and related risks and controls, and to develop innovative audit analytics based upon audit needs
- Strong interpersonal and multicultural skills for interfacing with all levels of internal and external audit and management
- Self-driven, problem-solving approach
- Understanding of procedures and following these to maintain the quality and security of processes
- Detail-oriented approach, consistently performing diligent self-reviews of work product, and attention to data completeness and accuracy
- Data literate, with the ability to understand and effectively communicate what data means to technical and non-technical stakeholders
Proficiency in one or more of the following technical skills is required:
- SQL
- Python
- Hadoop ecosystem (Hive, Sqoop, PySpark, etc.)
- Alteryx
Proficiency in at least one of the following data visualization tools is a plus:
- Tableau
- MicroStrategy
- Cognos
Experience in the following areas would be a plus:
- Business Intelligence, including use of statistics, data modelling, data mining and predictive analytics
- Application of data science tools and techniques to advance the insights obtained through the interrogation of data
- Working with unstructured data such as PDF files
- Banking businesses (e.g., Institutional Clients Group, Consumer, Corporate Functions) or areas of expertise (e.g., Anti-Money Laundering, Regulatory Reporting)
- Big data analysis, including dedicated big data tools such as HUE and Hive
- Project Management / Solution Development Life Cycle
- Exposure to process mining software such as Celonis
What we offer:
- A chance to develop in a highly innovative environment where you can use the newest technologies in a top-quality organizational culture
- Professional development in a truly global environment
- Inclusive and friendly corporate culture where gender diversity and equality is widely recognized
- A supportive workplace for professionals returning to the office from childcare leave
- An enjoyable and challenging learning path, which leads to a deep understanding of Citi’s products and services
- Yearly discretionary bonus and competitive social benefits (private medical care, multisport, life insurance, award-winning pension scheme, holiday allowance, flexible working schedule and other)
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group: Decision Management
Job Family: Business Analysis
Time Type:
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Position
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost effective.
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrated high proficiency in programming fundamentals.
- 3-5 years of experience.
- At least 2 years of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes.
- Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage and Azure Data Lake Gen2.
- Experience using SQL DML to query modern RDBMSs efficiently (e.g., SQL Server, PostgreSQL).
- Strong understanding of software engineering principles and how they apply to data engineering (e.g., CI/CD, version control, testing).
- Experience with big data technologies (e.g., Spark).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications
- Learning agility
- Technical leadership
- Consulting on and managing business needs
- Strong experience in Python is preferred, but experience in other languages such as Scala, Java, C#, etc. is accepted.
- Experience building Spark applications using PySpark.
- Experience with file formats such as Parquet, Delta, Avro.
- Experience efficiently querying API endpoints as a data source.
- Understanding of the Azure environment and related services such as subscriptions, resource groups, etc.
- Understanding of Git workflows in software development.
- Using Azure DevOps pipelines and repositories to deploy and maintain solutions.
- Understanding of Ansible and how to use it in Azure DevOps pipelines.
Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, the work hours for employees will be aligned to support business requirements. The standard work week will be Monday to Friday. Working hours are 8:00am to 5:00pm or 1:30pm to 10:30pm. Chevron participates in E-Verify in certain locations as required by law.
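For illustration, a minimal sketch of a Databricks-style PySpark job that validates raw data from ADLS Gen2 and lands it as a Delta table, in the spirit of the responsibilities above. The storage account, container, paths, and columns are assumptions.

```python
# Sketch of an ADLS Gen2 -> Delta curation step on Azure Databricks; names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales_delta/"

raw = spark.read.option("header", True).csv(raw_path)

# Basic validation: keep rows with a parseable sale date and a positive amount.
clean = (
    raw.withColumn("sale_date", F.to_date("sale_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("sale_date").isNotNull() & (F.col("amount") > 0))
)

# Land the curated layer as a partitioned Delta table for downstream Synapse/Databricks queries.
clean.write.format("delta").mode("overwrite").partitionBy("sale_date").save(curated_path)
```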
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
- End-to-end development and delivery of MIS reports and dashboards supporting credit card and lending portfolio acquisition, early engagement, existing customer management, rewards, retention and attrition.
- Partner with business stakeholders to understand requirements and deliver actionable insights through automated reporting solutions.
- Maintain and optimize existing SAS-based reporting processes while leading the migration to Python/PySpark on big data platforms.
- Design and build interactive dashboards in Tableau for senior leadership and regulatory reporting.
- Build and implement an automated audit framework to ensure data accuracy, completeness and consistency across the entire reporting life cycle.
- Collaborate with Data Engineering and IT teams to leverage data lakes and enterprise data platforms.
- Mentor junior analysts and contribute to knowledge sharing across teams.
- Support ad-hoc analysis and audits with quick turnaround and attention to data integrity.
Qualifications:
- Experience in MIS reporting, data analytics, or BI in Banking/Financial Services, with a strong focus on credit cards.
- Proficiency in SAS for data extraction, manipulation, and automation.
- Advanced skills in Python and PySpark, particularly in big data environments (e.g., Hadoop, Hive, Databricks).
- Expertise in Tableau for dashboard design and data storytelling.
Job Family Group: Management Development Programs
Job Family: Undergraduate
Time Type:
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
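As a hedged illustration of the SAS-to-PySpark reporting migration described above, a small monthly MIS aggregation with a basic completeness check of the kind an automated audit framework might include. The Hive table and column names are invented for the example.

```python
# Sketch of a monthly portfolio MIS aggregation in PySpark; table/columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cards_mis").getOrCreate()

accounts = spark.table("cards_lake.account_monthly_snapshot")

# Simple audit check before reporting: no records without an account identifier.
null_ids = accounts.filter(F.col("account_id").isNull()).count()
assert null_ids == 0, f"{null_ids} rows with null account_id - investigate upstream loads"

# Portfolio metrics by month and acquisition channel.
mis = (
    accounts.groupBy("snapshot_month", "acquisition_channel")
            .agg(
                F.countDistinct("account_id").alias("active_accounts"),
                F.sum("spend_amount").alias("total_spend"),
                F.avg("revolving_balance").alias("avg_revolving_balance"),
            )
)

# Publish to a reporting schema consumed by Tableau dashboards.
mis.write.mode("overwrite").saveAsTable("cards_mart.portfolio_mis_monthly")
```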
Posted 6 days ago
7.0 - 12.0 years
9 - 14 Lacs
Pune, Hinjewadi
Work from Office
Job Summary
Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organization's efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives.
Software Requirements
Required skills:
- Proficiency in PySpark
- Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL)
- Experience with Linux/Unix operating systems
- Data processing tools like Apache Kafka or similar streaming platforms (a brief streaming sketch follows this posting)
Preferred skills:
- Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight)
- Knowledge of Python (beyond PySpark), Java or Scala relevant to big data applications
- Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi)
Overall Responsibilities
- Design, develop, and optimize scalable data processing pipelines using PySpark.
- Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions.
- Implement data transformations, aggregations, and extraction processes to support analytics and reporting.
- Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance.
- Troubleshoot and resolve performance issues within big data workflows.
- Document data processes, architectures, and best practices to promote consistency and knowledge sharing.
- Support data migration and integration efforts across varied platforms.
Strategic Objectives:
- Enable efficient and reliable data processing to meet organizational analytics and reporting needs.
- Maintain high standards of data security, compliance, and operational durability.
- Drive continuous improvement in data workflows and infrastructure.
Performance Outcomes & Expectations:
- Efficient processing of large-scale data workloads with minimum downtime.
- Clear, maintainable, and well-documented code.
- Active participation in team reviews, knowledge transfer, and innovation initiatives.
Technical Skills (By Category)
Programming languages:
- Required: PySpark (essential); Python (needed for scripting and automation)
- Preferred: Java, Scala
Databases/data management:
- Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake)
- Preferred: Experience with NoSQL databases (Cassandra, HBase)
Cloud technologies:
- Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP
- Preferred: Cloud certifications
Frameworks and libraries:
- Required: Spark SQL, Spark MLlib (basic familiarity)
- Preferred: Integration with streaming platforms (e.g., Kafka), data validation tools
Development tools and methodologies:
- Required: Version control systems (e.g., Git), Agile/Scrum methodologies
- Preferred: CI/CD pipelines, containerization (Docker, Kubernetes)
Security protocols:
- Optional: Basic understanding of data security practices and compliance standards relevant to big data management
Experience Requirements
- Minimum of 7+ years of experience in big data environments with hands-on PySpark development.
- Proven ability to design and implement large-scale data pipelines.
- Experience working with cloud and on-premises big data architectures.
- Preference for candidates with domain-specific experience in finance, banking, or related sectors.
Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply.
Day-to-Day Activities
- Develop, test, and deploy PySpark data processing jobs to meet project specifications.
- Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews.
- Optimize existing data pipelines for performance and scalability.
- Monitor data workflows, troubleshoot issues, and implement fixes.
- Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs.
- Contribute to documentation, standards, and best practices for data engineering processes.
- Support the onboarding of new data sources, including integration and validation.
Decision-Making Authority & Responsibilities:
- Identify performance bottlenecks and propose effective solutions.
- Decide on appropriate data processing approaches based on project requirements.
- Escalate issues that impact project timelines or data integrity.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field; equivalent experience considered.
- Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar.
- Commitment to ongoing professional development in data engineering and big data technologies.
- Demonstrated ability to adapt to evolving data tools and frameworks.
Professional Competencies
- Strong analytical and problem-solving skills, with the ability to model complex data workflows.
- Excellent communication skills to articulate technical solutions to non-technical stakeholders.
- Effective teamwork and collaboration in a multidisciplinary environment.
- Adaptability to new technologies and emerging trends in big data.
- Ability to prioritize tasks effectively and manage time in fast-paced projects.
- Innovation mindset, actively seeking ways to improve data infrastructure and processes.
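Since the posting above pairs PySpark with streaming platforms such as Kafka, here is a hedged Structured Streaming sketch: consume a topic, parse JSON events, and aggregate over event-time windows. The broker address, topic, schema, and checkpoint path are all placeholder assumptions.

```python
# Sketch of a Kafka -> Spark Structured Streaming aggregation; connection details are placeholders.
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("txn_stream").getOrCreate()

schema = T.StructType([
    T.StructField("txn_id", T.StringType()),
    T.StructField("amount", T.DoubleType()),
    T.StructField("event_ts", T.TimestampType()),
])

# Consume the Kafka topic as a streaming source.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "transactions")
         .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns.
parsed = stream.select(
    F.from_json(F.col("value").cast("string"), schema).alias("t")
).select("t.*")

# Windowed aggregation with a watermark to bound streaming state.
totals = (
    parsed.withWatermark("event_ts", "10 minutes")
          .groupBy(F.window("event_ts", "5 minutes"))
          .agg(F.sum("amount").alias("total_amount"))
)

query = (
    totals.writeStream.outputMode("update")
          .format("console")  # swap for a Delta/HDFS sink in a real pipeline
          .option("checkpointLocation", "/tmp/checkpoints/txn_stream")
          .start()
)
query.awaitTermination()
```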
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Business Analytics Analyst 1 is a trainee professional role. It requires a good knowledge of the range of processes, procedures and systems used in carrying out assigned tasks and a basic understanding of the underlying concepts and principles upon which the job is based, along with a good understanding of how the team interacts with others in accomplishing the objectives of the area. The role makes evaluative judgements based on the analysis of factual information and is expected to resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. The analyst must be able to exchange information in a concise way and be sensitive to audience diversity. The role has a limited but direct impact on the business through the quality of the tasks/services provided, and the impact of the job holder is restricted to their own job. (TTS Analytics, C09)
What do we do?
The TTS Analytics team provides analytical insights to the Product, Pricing, Client Experience and Sales functions within the global Treasury & Trade Services business. The team works on business problems focused on driving acquisitions, cross-sell, revenue growth and improvements in client experience. The team extracts relevant insights, identifies business opportunities, converts business problems into analytical frameworks, and uses big data tools and AI/ML techniques to drive data-driven business outcomes in collaboration with business and product partners.
Role Description
- The role will report to the AVP or the VP leading the team.
- The role will involve working on multiple analyses through the year on business problems across the client life cycle - acquisition, engagement, client experience and retention - for the TTS business.
- This will involve leveraging multiple analytical approaches, tools and techniques, and working on multiple data sources (client profile and engagement data, transactions and revenue data, digital data, unstructured data like call transcripts, etc.) to provide data-driven insights to business partners and functional stakeholders.
Qualifications:
- Bachelor's degree with 2-4 years of experience in data analytics, or Master's degree with 0-2 years of experience in data analytics
- Demonstrated ability to solve problems
- Customer service skills
- High attention to detail
Education: Bachelor's/University degree or equivalent experience; 0-2 years of experience for a Master's degree, 2-4 years for a Bachelor's degree
Skills:
Analytical skills:
- Strong logical reasoning and problem-solving ability
- Proficient in converting business problems into analytical tasks, and analytical findings into business insights
- Proficient in formulating analytical methodology and identifying trends and patterns in data
- Ability to work hands-on to retrieve and manipulate data from big data environments
Tools and platforms:
- Knowledge of Python, SQL, PySpark and related tools
- Proficient in MS Excel and PowerPoint
- Good to have: experience with Tableau or other visualization tools
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group: Decision Management
Job Family: Business Analysis
Time Type: Full time
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract - 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.
Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.
Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, containerization (Docker), clean coding practices
Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background
Mode: Remote
Duration: 6 Months
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 6 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Gurgaon/Bangalore, India
AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics (IDA) team, is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts towards creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.
What You’ll Be DOING
What will your essential responsibilities include?
- Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate.
- Understand current and future data consumption patterns and architecture (granular level), and partner with Architects to ensure optimal design of data layers.
- Apply best practices in data architecture, for example balancing materialization and virtualization, optimal levels of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
- Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers.
- Act as a best practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
- Design prototypes and work in a fast-paced iterative solution delivery model.
- Design, develop and maintain ETL pipelines using PySpark in Azure Databricks using Delta tables; use Harness for the deployment pipeline.
- Monitor performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
- Diagnose system performance issues related to data processing and implement solutions to address them.
- Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
- Maintain integrity and quality across all pipelines and environments.
- Understand and follow secure coding practices to make sure code is not vulnerable.
You will report to the Technical Lead.
What You Will BRING
We’re looking for someone who has these abilities and skills:
Required Skills And Abilities
- Effective communication skills.
- Bachelor’s degree in computer science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience.
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS).
- Solid knowledge of network and firewall concepts.
- Solid experience writing, optimizing and analyzing SQL.
- Relevant years of experience with Python.
- Ability to break complex data requirements down into achievable targets and architect solutions accordingly.
- Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile.
- Experience using Harness.
- Technical lead responsible for both individual and team deliveries.
Desired Skills And Abilities
- Worked on big data migration projects.
- Worked on performance tuning at both the database and big data platform levels.
- Ability to interpret complex data requirements and architect solutions.
- Distinctive problem-solving and analytical skills combined with robust business acumen.
- Strong fundamentals of Parquet and Delta file formats.
- Effective knowledge of the Azure cloud computing platform.
- Familiarity with reporting software - Power BI is a plus.
- Familiarity with DBT is a plus.
- Passion for data and experience working within a data-driven organization.
- You care about what you do, and what we do.
Who WE are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com
What we OFFER
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another - and our business - to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe
- Robust support for Flexible Working Arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.
Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.
Sustainability
At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future.
Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving. For more information, please see axaxl.com/sustainability.
Posted 6 days ago
5.0 - 10.0 years
0 Lacs
Pune, Bengaluru
Hybrid
Job Summary
We are seeking a highly skilled Hadoop Developer / Lead Data Engineer to join our data engineering team based in Bangalore or Pune. The ideal candidate will have extensive experience with Hadoop ecosystem technologies and cloud-based big data platforms, particularly Google Cloud Platform (GCP). This role involves designing, developing, and maintaining scalable data ingestion, processing, and transformation frameworks to support enterprise data needs.
Minimum Qualifications
- Bachelor's degree in Computer Science, Computer Information Systems, or a related technical field.
- 5-10 years of experience in software engineering or data engineering, with a strong focus on big data technologies.
- Proven experience in implementing software development life cycles (SDLC) in enterprise environments.
Technical Skills & Expertise
Big data technologies:
- Expertise in the Hadoop platform, Hive, and related ecosystem tools.
- Strong experience with Apache Spark (using SQL, Scala, and/or Java).
- Experience with real-time data streaming using Kafka.
Programming languages & frameworks:
- Proficient in PySpark and SQL for data processing and transformation.
- Strong coding skills in Python.
Cloud technologies (Google Cloud Platform):
- Experience with BigQuery for data warehousing and analytics.
- Familiarity with Cloud Composer (Airflow) for workflow orchestration.
- Hands-on with Dataproc for managed Spark and Hadoop clusters.
Responsibilities
- Design, develop, and implement scalable data ingestion and transformation pipelines using Hadoop and GCP services.
- Build real-time and batch data processing solutions leveraging Spark, Kafka, and related technologies.
- Ensure data quality, governance, and lineage by implementing automated validation and classification frameworks.
- Collaborate with cross-functional teams to deploy and operationalize data analytics tools at enterprise scale.
- Participate in production support and on-call rotations to maintain system reliability.
- Follow established SDLC practices to deliver high-quality, maintainable solutions.
Preferred Qualifications
- Experience leading or mentoring data engineering teams.
- Familiarity with CI/CD pipelines and DevOps best practices for big data environments.
- Strong communication skills with an ability to collaborate across teams.
Posted 6 days ago
4.0 - 9.0 years
10 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
About Us: KPI Partners is a leading provider of data analytics and performance management solutions, dedicated to helping organizations harness the power of their data to drive business success. Our team of experts is at the forefront of the data revolution, delivering innovative solutions to our clients. We are currently seeking a talented and experienced Senior Developer / Lead Data Engineer with expertise in Incorta to join our dynamic team. Job Description: As a Senior Developer / Lead Data Engineer at KPI Partners, you will play a critical role in designing, developing, and implementing data solutions using Incorta. You will work closely with cross-functional teams to understand data requirements, build and optimize data pipelines, and ensure that our data integration processes are efficient and effective. This position requires strong analytical skills, proficiency in Incorta, and a passion for leveraging data to drive business insights. Key Responsibilities: - Design and develop scalable data integration solutions using Incorta. - Collaborate with business stakeholders to gather data requirements and translate them into technical specifications. - Create and optimize data pipelines to ensure high data quality and availability. - Perform data modeling, ETL processes, and data engineering activities to support analytics initiatives. - Troubleshoot and resolve data-related issues across various systems and environments. - Mentor and guide junior developers and data engineers, fostering a culture of learning and collaboration. - Stay updated on industry trends, best practices, and emerging technologies related to data engineering and analytics. - Work with the implementation team to ensure smooth deployment of solutions and provide ongoing support. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field. - 5+ years of experience in data engineering or related roles with a strong focus on Incorta. - Expertise in Incorta and its features, along with experience in data modeling and ETL processes. - Proficiency in SQL and experience with relational databases (e.g., MySQL, Oracle, SQL Server). - Strong analytical and problem-solving skills, with the ability to work with complex data sets. - Excellent communication and collaboration skills to work effectively in a team-oriented environment. - Familiarity with cloud platforms (e.g., AWS, Azure) and data visualization tools is a plus. - Experience with programming languages such as Python, Java, or Scala is advantageous. Why Join KPI Partners? - Opportunity to work with a talented and passionate team in a fast-paced environment. - Competitive salary and benefits package. - Continuous learning and professional development opportunities. - A collaborative and inclusive workplace culture that values diversity and innovation. KPI Partners is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Join us at KPI Partners and help us unlock the power of data for our clients!
Posted 6 days ago
5.0 - 10.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Hybrid
Our client is a global IT service and consulting organization. Exp: 5+ yrs. Skill: Apache Spark. Location: Bangalore, Hyderabad, Pune, Chennai, Coimbatore, Gr. Noida. Excellent knowledge of Spark; the professional must have a thorough understanding of the Spark framework, performance tuning, etc. Excellent knowledge and hands-on experience of at least 4+ years in Scala or PySpark. Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory. Strong Unix and shell scripting skills. Excellent interpersonal skills and, for experienced candidates, excellent leadership skills. Good knowledge of any of the CSPs like Azure, AWS, or GCP is mandatory; certifications on Azure will be an added advantage.
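As an illustration of the Spark performance-tuning knowledge this role asks for, the sketch below shows three common PySpark techniques: broadcasting a small dimension table, repartitioning on the aggregation key, and caching a reused DataFrame. The paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

orders = spark.read.parquet("/data/orders")   # large fact table (hypothetical path)
stores = spark.read.parquet("/data/stores")   # small dimension table (hypothetical path)

# Broadcast the small table so the join avoids a full shuffle.
enriched = orders.join(F.broadcast(stores), "store_id")

# Repartition by the aggregation key and cache before the DataFrame is reused.
enriched = enriched.repartition(200, "store_id").cache()

summary = enriched.groupBy("store_id").agg(F.sum("amount").alias("revenue"))
summary.write.mode("overwrite").parquet("/data/out/revenue_by_store")
```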
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. Your Role And Responsibilities As an Associate Data Scientist at IBM, you will work to solve business problems using leading edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders. In Your Role, You May Be Responsible For Implementing and validating predictive and prescriptive models and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects Writing programs to cleanse and integrate data in an efficient and reusable manner Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions Evaluating modelling results and communicating the results to technical and non-technical audiences Preferred Education Master's Degree Required Technical And Professional Expertise Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help in showcasing the ability of the Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/PoCs. Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Preferred Technical And Professional Experience Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras or Hugging Face. Understanding of the usage of libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms. Experience and working knowledge in COBOL & Java would be preferred. Experience in Python and PySpark will be an added advantage.
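To make the "implement and validate predictive models" part of this role concrete, here is a minimal, hypothetical scikit-learn sketch: cleanse a small dataset with pandas, fit a baseline classifier, and report validation metrics. The file name and column names are invented for the example and are not from the posting.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Hypothetical dataset: drop rows with a missing target before modelling.
df = pd.read_csv("customer_events.csv").dropna(subset=["churned"])

X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Baseline pipeline: scale features, then fit a logistic regression.
model = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])
model.fit(X_train, y_train)

# Validate on the held-out split and report precision/recall/F1.
print(classification_report(y_test, model.predict(X_test)))
```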
Posted 6 days ago
2.0 - 5.5 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC will focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance and delivery risk. Your work will involve the process of continuous improvement and optimising of the managed services process, tools and services. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. Role: Associate Tower: Data, Analytics & Specialist Managed Service Experience: 2.0 - 5.5 years Key Skills: AWS Educational Qualification: BE / B Tech / ME / M Tech / MBA Work Location: India. Job Description As an Associate, you will work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to: Use feedback and reflection to develop self-awareness, personal strengths, and address development areas. Flexible to work in stretch opportunities/assignments. Demonstrate critical thinking and the ability to bring order to unstructured problems. Ticket quality and deliverables review, status reporting for the project. Adherence to SLAs, experience in incident management, change management and problem management. Seek and embrace opportunities which give exposure to different situations, environments, and perspectives. Use straightforward communication, in a structured way, when influencing and connecting with others. Able to read situations and modify behavior to build quality relationships. Uphold the firm's code of ethics and business conduct. Demonstrate leadership capabilities by working with clients directly and leading the engagement. Work in a team environment that includes client interactions, workstream management, and cross-team collaboration.
Good team player, take up cross competency work and contribute to COE activities. Escalation/Risk management. Position Requirements Required Skills: AWS Cloud Engineer Job description: Candidate is expected to demonstrate extensive knowledge and/or a proven record of success in the following areas: Should have minimum 2 years hands-on experience building advanced data warehousing solutions on leading cloud platforms. Should have minimum 1-3 years of Operate/Managed Services/Production Support experience. Should have extensive experience in developing scalable, repeatable, and secure data structures and pipelines to ingest, store, collect, standardize, and integrate data for downstream consumption like Business Intelligence systems, analytics modelling, data scientists etc. Designing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into data storage systems, such as data warehouses or data lakes. Should have experience in building efficient ETL/ELT processes using industry leading tools like AWS, AWS Glue, AWS Lambda, AWS DMS, PySpark, SQL, Python, DBT, Prefect, Snowflake, etc. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in AWS. Work together with data scientists and analysts to understand the needs for data and create effective data workflows. Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Monitoring and troubleshooting data pipelines and resolving issues related to data processing, transformation, or storage. Implementing and maintaining data security and privacy measures, including access controls and encryption, to protect sensitive data. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Should have experience in building and maintaining Data Governance solutions (Data Quality, Metadata management, Lineage, Master Data Management and Data security) using industry leading tools. Scaling and optimizing schema and performance tuning SQL and ETL pipelines in data lake and data warehouse environments. Should have hands-on experience with data analytics tools like Informatica, Collibra, Hadoop, Spark, Snowflake etc. Should have experience of ITIL processes like Incident management, Problem Management, Knowledge management, Release management, Data DevOps etc. Should have strong communication, problem solving, quantitative and analytical abilities. Nice To Have AWS certification Managed Services - Data, Analytics & Insights Managed Service At PwC we relentlessly focus on working with our clients to bring the power of technology and humans together and create simple, yet powerful solutions. We imagine a day when our clients can simply focus on their business knowing that they have a trusted partner for their IT needs. Every day we are motivated and passionate about making our clients better. Within our Managed Services platform, PwC delivers integrated services and solutions that are grounded in deep industry experience and powered by the talent that you would expect from the PwC brand. The PwC Managed Services platform delivers scalable solutions that add greater value to our client's enterprise through technology and human-enabled experiences.
Our team of highly skilled and trained global professionals, combined with the use of the latest advancements in technology and process, allows us to provide effective and efficient outcomes. With PwC’s Managed Services our clients are able to focus on accelerating their priorities, including optimizing operations and accelerating outcomes. PwC brings a consultative first approach to operations, leveraging our deep industry insights combined with world class talent and assets to enable transformational journeys that drive sustained client outcomes. Our clients need flexible access to world class business and technology capabilities that keep pace with today’s dynamic business environment. Within our global Managed Services platform, we provide Data, Analytics & Insights, where we focus on the evolution of our clients’ Data and Analytics ecosystem. Our focus is to empower our clients to navigate and capture the value of their Data & Analytics portfolio while cost-effectively operating and protecting their solutions. We do this so that our clients can focus on what matters most to their business: accelerating growth that is dynamic, efficient and cost-effective. As a member of our Data, Analytics & Insights Managed Service team, we are looking for candidates who thrive working in a high-paced work environment capable of working on a mix of critical Data, Analytics & Insights offerings and engagements including help desk support, enhancement, and optimization work, as well as strategic roadmap and advisory level work. It will also be key to lend experience and effort in helping win and support customer engagements from not only a technical perspective, but also a relationship perspective.
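As a rough sketch of the ETL and data-validation work described in this listing (not PwC's actual tooling), the snippet below uses plain PySpark to read raw CSVs from S3, apply simple cleansing and quality rules, and split valid rows from rejects. The bucket names, columns, and rules are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-with-validation").getOrCreate()

# Reading s3:// paths assumes the cluster has the S3/Hadoop connectors configured.
raw = spark.read.option("header", True).csv("s3://raw-zone/customers/")

# Basic cleansing: normalise strings and drop duplicate business keys.
cleaned = (
    raw.withColumn("email", F.lower(F.trim("email")))
       .dropDuplicates(["customer_id"])
)

# Simple quality rules: quarantine rows that fail validation instead of dropping them.
valid = cleaned.filter(F.col("customer_id").isNotNull() & F.col("email").contains("@"))
rejects = cleaned.subtract(valid)

valid.write.mode("overwrite").parquet("s3://curated-zone/customers/")
rejects.write.mode("overwrite").parquet("s3://quarantine-zone/customers/")
```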
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad / Chennai / Pune / Mumbai (Hybrid) Notice Period: up to 60 days About Us: Zemoso Technologies is a Software Product Market Fit Studio that brings Silicon Valley-style rapid prototyping and rapid application builds to entrepreneurs and corporate innovation. We offer Innovation as a Service and work on ideas from scratch, taking them to the Product Market Fit stage using Design Thinking -> Lean Execution -> Agile Methodology. We were featured as one of Deloitte's Fastest 50 growing tech companies from India thrice (2016, 2018, and 2019). We were also featured in the Deloitte Technology Fast 500 Asia Pacific in both 2016 and 2018. We are located in Hyderabad, India, and Dallas, US, and have recently incorporated another office in Waterloo, Canada. What You Will Do: - Develop innovative software solutions using design thinking, lean, and agile methodologies. - Work on high-quality software products using the latest technologies and platforms. - Collaborate with fast-paced, dynamic teams to deliver value-driven client experiences. - Mentor and contribute to the growth of the next generation of developers. Must-Have Skills: - Experience: 3+ years. - Strong proficiency in the Python programming language and Django. - Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field. Nice to Have Qualifications: - Experience with Pandas and PySpark. - Product and customer-centric mindset. - Great object-oriented skills, including design patterns. - Good to great problem-solving and communication skills. - Experience in working with cross-border, distributed teams. Get to know us better: https://www.zemosolabs.com
Posted 6 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
Role: Senior Data Engineer with Databricks. Experience: 5+ Years Job Type: Contract Contract Duration: 6 Months Budget: 1.0 lakh per month Location: Remote JOB DESCRIPTION: We are looking for a dynamic and experienced Senior Data Engineer – Databricks to design, build, and optimize robust data pipelines using the Databricks Lakehouse platform. The ideal candidate should have strong hands-on skills in Apache Spark, PySpark, cloud data services, and a good grasp of Python and Java. This role involves close collaboration with architects, analysts, and developers to deliver scalable and high-performing data solutions across AWS, Azure, and GCP. ESSENTIAL JOB FUNCTIONS 1. Data Pipeline Development • Build scalable and efficient ETL/ELT workflows using Databricks and Spark for both batch and streaming data. • Leverage Delta Lake and Unity Catalog for structured data management and governance. • Optimize Spark jobs by tuning configurations, caching, partitioning, and serialization techniques. 2. Cloud-Based Implementation • Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery). • Manage and optimize data storage, access control, and pipeline orchestration using native cloud tools. • Use tools like Databricks Auto Loader and SQL Warehousing for efficient data ingestion and querying. 3. Programming & Automation • Write clean, reusable, and production-grade code in Python and Java. • Automate workflows using orchestration tools (e.g., Airflow, ADF, or Cloud Composer). • Implement robust testing, logging, and monitoring mechanisms for data pipelines. 4. Collaboration & Support • Collaborate with data analysts, data scientists, and business users to meet evolving data needs. • Support production workflows, troubleshoot failures, and resolve performance bottlenecks. • Document solutions, maintain version control, and follow Agile/Scrum processes. Required Skills Technical Skills: • Databricks: Hands-on experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration. • Spark: Expertise in Spark transformations, joins, window functions, and performance tuning. • Programming: Strong in PySpark and Java, with experience in data validation and error handling. • Cloud Services: Good understanding of AWS, Azure, or GCP data services and security models. • DevOps/Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools. Experience: • 5–8 years of data engineering or backend development experience. • Minimum 1–2 years of hands-on work in Databricks with Spark. • Exposure to large-scale data migration, processing, or analytics projects. Certifications (nice to have): Databricks Certified Data Engineer Associate Working Conditions Hours of work - Full-time hours; flexibility for remote work while ensuring availability during US timings. Overtime expectations - Overtime may not be required as long as the commitment is accomplished. Work environment - Primarily remote; occasional on-site work may be needed only during client visits. Travel requirements - No travel required. On-call responsibilities - On-call duties during deployment phases. Special conditions or requirements - Not applicable. Workplace Policies and Agreements Confidentiality Agreement: Required to safeguard client sensitive data. Non-Compete Agreement: Must be signed to ensure proprietary model security. Non-Disclosure Agreement: Must be signed to ensure client confidentiality and security.
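For reference, a minimal Databricks-style sketch of the Auto Loader plus Delta Lake ingestion mentioned above might look like the following. It assumes a Databricks workspace (where `spark` is predefined) and recent runtime, and the mount paths are hypothetical placeholders.

```python
from pyspark.sql import functions as F

# Auto Loader incrementally discovers new files in the landing path (Databricks-specific source).
bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
    .load("/mnt/landing/events/")
    .withColumn("ingest_ts", F.current_timestamp())
)

# Append new records to a Delta table, with a checkpoint for exactly-once progress tracking.
(bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/events_bronze")
    .outputMode("append")
    .trigger(availableNow=True)   # process all available files, then stop (incremental batch style)
    .start("/mnt/lake/bronze/events"))
```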
Posted 6 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description Circle K (part of Alimentation Couche-Tard Inc. (ACT)) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT’s Global Data & Analytics Team, and the Lead Data Analyst will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units. About The Role The incumbent will be responsible for deploying analytics algorithms and tools on the chosen tech stack for efficient and effective delivery. Responsibilities include delivering insights and targeted action plans, addressing specific areas of risk and opportunity, working cross-functionally with business and technology teams, and leveraging the support of global teams for analysis and data. Roles & Responsibilities Analytics (Data & Insights) Evaluate performance of categories and activities, using proven and advanced analytical methods Support stakeholders with actionable insights based on transactional, financial or customer data on an ongoing basis Oversee the design and measurement of experiments and pilots Initiate and conduct advanced analytics projects such as clustering, forecasting, causal impact Build highly impactful and intuitive dashboards that bring the underlying data to life through insights Operational Excellence Improve data quality by using and improving tools to automatically detect issues Develop analytical solutions or dashboards using user-centric design techniques in alignment with ACT’s protocol Study industry/organization benchmarks and design/develop analytical solutions to monitor or improve business performance across retail, marketing, and other business areas Stakeholder Management Work with peers, functional consultants, data engineers, and cross-functional teams to lead / support the complete lifecycle of analytical applications, from development of mock-ups and storyboards to complete production-ready applications Provide regular updates to stakeholders to simplify and clarify complex concepts, and communicate the output of work to business Create compelling documentation or artefacts that connect business to the solutions Coordinate internally to share key learnings with other teams and lead to accelerated business performance Be an advocate for a data-driven culture among the stakeholders Job Requirements Education Bachelor’s degree required, preferably in an analytical discipline like Finance, Mathematics, Statistics, Engineering, or similar Relevant Experience Experience: 7+ years for Lead Data Analyst Relevant working experience in a quantitative/applied analytics role Experience with programming, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Spark / SQL / Python Experience in leading projects and/or leading and mentoring small teams is a plus Excellent communication skills in English, both verbal and written Behavioural Skills Delivery Excellence Business disposition Social intelligence Innovation and agility Knowledge Functional Analytics (Retail Analytics, Supply Chain Analytics, Marketing Analytics, Customer Analytics, etc.) Working understanding of Statistical modelling & Time Series Analysis using Analytical tools (Python, PySpark, R, etc.)
Enterprise reporting systems and relational database management systems (MySQL, Microsoft SQL Server, etc.) Business intelligence & reporting (Power BI) Cloud computing services in Azure/AWS/GCP for analytics
Posted 6 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Data Engineering Specialist (11788) Investec is a distinctive Specialist Bank serving clients principally in the UK and South Africa. Our culture gives us our edge: we work hard to find colleagues who'll think out of the ordinary and we put them in environments where they'll flourish. We combine a flat structure with a focus on internal mobility. If you can bring an entrepreneurial spirit and a desire to learn and collaborate to your work, this could be the boost your career deserves. Team Description The Offshore Data Engineering Lead will be responsible for overseeing the data and application development efforts that support our Microsoft Data Mesh Platform. Working as part of the Investec Central Data Team, the candidate will be responsible for leading development on solutions and applications that support our data domain teams with the creation of data products. This role involves driving technical initiatives, exploring new technologies, and enhancing engineering practices within the data teams in line with the group engineering strategy. The Data Engineering Lead will be a key driver for Investec's move to Microsoft Fabric and other enabling data quality, data management, and data orchestration technologies. Key Roles And Responsibilities Lead the development and implementation of data and custom application solutions that support the creation of data products across various data domains. Design, build, and maintain data pipelines using Microsoft Azure Data Platform, Microsoft Fabric and Databricks technologies. Ensure data quality, integrity, and security within the data mesh architecture. Share group engineering context with the CIO and engineers within the business unit continuously. Drive engineering efficiency and enable teams to deliver high-quality software quickly within the business unit Cultivate a culture focused on security, risk management, and best practices in engineering Actively engage with the data domain teams, business units and wider engineering community to promote knowledge sharing Spearhead technical projects and innovation within the business unit's engineering teams and contribute to group engineering initiatives Advance the technical skills of the engineering community and mentor engineers within the business unit Enhance the stability, performance, and security of the business unit's systems.
Develop and promote exceptional engineering documentation and practices Build a culture of development and mentorship within the central data team Provide guidance on technology and engineering practices Actively encourages creating Investec open-source software where appropriate within the business unit Actively encourages team members within the business unit to speak at technical conferences based on the work being done Core Skills And Knowledge Proven experience in data engineering, with a strong focus on Microsoft Data Platform technologies, including Azure Data Factory, Azure SQL Database, and Databricks Proficiency in programming languages such as C# and/or Python, with experience in application development being a plus Experience with CI/CD pipelines, Azure, and Azure DevOps Strong experience and knowledge with PySpark and SQL with the ability to create solutions using Microsoft Fabric Ability to create solutions that query and work with web APIs In-depth knowledge of Azure, containerisation, and Kubernetes Strong understanding of data architecture concepts, particularly data mesh principles Excellent problem-solving skills and the ability to work independently as a self-starter Strong communication and collaboration skills, with the ability to work effectively in a remote team environment Relevant degree in Computer Science, Data Engineering, or a related field is preferred As part of our collaborative & agile culture, our working week is 4 days in the office and one day remote. Investec offers a range of wellbeing benefits to make our people feel healthier, balanced and more fulfilled in their lives inside and outside of work. Embedded in our culture is a sense of belonging and inclusion. This creates an environment in which everyone is free to be themselves which helps to drive innovation, creativity and ultimately business performance. At Investec we want everyone to find it easy to be themselves, and to feel they belong. It's a responsibility we all share and is integral to our purpose and values as an organisation. Research shows that some candidates can be reluctant to apply to a role unless they meet all the criteria. We pride ourselves on our entrepreneurial spirit here and welcome you to do the same – if the role excites you, please don't let our person specification hold you back. Get in touch! Recite Me We commit to ensuring that everyone is fairly assessed during our recruitment process. To assist candidates in completing their application form, Recite Me assistive technology is available on our Careers pages. This can be accessed by clicking on the ‘Accessibility Options' link at the top of the page. The Recite Me tool includes a screen reader, styling and customisation options, a series of reading aids, a translator and more. If you have any form of disability or neurodivergent need and require further assistance in completing your application, please contact the Careers team at CareersIGSI@investec.com who will be happy to assist.
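One small illustration of the "query and work with web APIs" requirement above: a hedged sketch that pulls JSON from a hypothetical REST endpoint with requests, loads it into a Spark DataFrame, and queries it with Spark SQL. The endpoint, response shape, and field names are invented for the example.

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-to-dataframe").getOrCreate()

# Hypothetical endpoint returning {"items": [{"currency": "...", "rate": ...}, ...]}.
resp = requests.get("https://api.example.com/v1/rates", timeout=30)
resp.raise_for_status()
records = resp.json()["items"]

# Turn the list of dicts into a DataFrame and query it with SQL.
df = spark.createDataFrame(records)
df.createOrReplaceTempView("rates")
spark.sql("SELECT currency, AVG(rate) AS avg_rate FROM rates GROUP BY currency").show()
```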
Posted 6 days ago
160.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About PwC: PricewaterhouseCoopers (PwC) is a leading global consulting firm. For more than 160 years, PwC has worked to build trust in society and solve important problems for clients and the communities in which we live and work. Today we have more than 276,000 people across 157 countries working towards this goal. The US Advisory Bangalore Acceleration Center is a natural extension of our United States-based consulting capabilities, providing support to a broad range of practice teams. Our US-owned ACs are fully integrated into our client facing teams and are key to PwC's success in the marketplace. Job Summary: At PwC, we are betting big on data, analytics, and a digital revolution to transform the way deals are done. Analytics is increasingly a major driver of competitive advantages in deal-making, and value creation for private equity owned portfolio companies. PwC brings data-driven insights through advanced techniques to help clients make better strategic decisions, uncover value, and improve returns on their investments. The PwC Deal Analytics & Value Creation practice is a blend of deals and consulting professionals with diverse skills and backgrounds, including financial, commercial, operational, and data science. We support private equity and corporate clients across all phases of the deal lifecycle, including diligence, post-deal, and preparation for exit/divestiture. Our data-driven approach delivers insights in diligence at deal speed, works with clients to improve performance post-deal, and brings a commercial insights lens through third-party and alternative data to help inform decisions. A career in our fast-paced deal analytics practice, a business unit within the PwC deals platform, will allow you to work with top private equity and corporate clients across all sectors on complex and dynamic multi-billion-dollar decisions. Each client, deal, and situation is unique, and the ability to translate data into actionable insights for our clients is crucial to our continued success. Job Description As an Experienced Associate, you will work as part of a team of problem solvers, helping solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to: Share and collaborate effectively with others. Contribute to practice enablement and business development activities. Learn new tools and technologies if required. Develop/implement automation solutions and capabilities that are aligned to clients' business requirements. Identify and make suggestions for improvements when problems and/or opportunities arise. Handle, manipulate and analyze data and information responsibly. Keep up to date with developments in the area of specialism. Communicate confidently in a clear, concise, and articulate manner - verbally and in the materials you produce. Build and maintain an internal and external network. Seek opportunities to learn about how PwC works as a global network of firms. Uphold the firm's code of ethics and business conduct.
Preferred Fields Of Study/Experience Bachelor's/Master’s degree from a reputed institute in Business Administration/Management, Data Science, Data Analytics, Finance, Accounting, Economics, Statistics, Computer and Information Science, Management Information Systems, Engineering, Mathematics A total of 1-4 years of work experience in analytics consulting and/or transaction services Preferred Knowledge/Skills Our team is a blend of deals and consulting professionals with an ability to work with data and teams across our practice to bring targeted commercial and operational insights through industry-specific experience and cutting-edge techniques. We are looking for individuals who demonstrate knowledge and a proven record of success in one or both of the following areas: Business Experience in effectively facilitating day to day stakeholder interactions and relationships based in the US Experience working on high-performing teams preferably in data analytics, consulting, and/or private equity Experience working with business frameworks to analyze markets and assess company position and performance Experience working with alternative data and market data sets to draw insight on competitive positioning and company performance Understanding of financial statements, business cycles (revenue, supply chain, etc.), business diligence, financial modeling, valuation, etc. Experience working in a dynamic, collaborative environment and working under time-sensitive client deadlines Provide insights by understanding the clients' businesses, their industry, and value drivers Strong communication and proven presentation skills Technical High degree of collaboration, ingenuity, and innovation to apply tools and techniques to address client questions Ability to synthesize insights and recommendations into a tight and cohesive presentation to clients Proven track record of data extraction/transformation, analytics, and visualization approaches and a high degree of data fluency Proven skills in the following preferred: Python, Advanced Excel, Alteryx, Power BI (including visualization and DAX), PySpark Experience working on GenAI / large language models (LLMs) is good to have Experience in big data and machine learning concepts Strong track record with leveraging data and business intelligence software to turn data into insights
Posted 6 days ago
2.0 years
0 Lacs
Dholera, Gujarat, India
On-site
About The Business - Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art Semiconductor Foundry. This facility will produce chips for applications such as power management IC, display drivers, microcontrollers (MCU) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. Tata Electronics is a subsidiary of the Tata group. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.’ Job Responsibilities - Architect and implement scalable offline data pipelines for manufacturing systems including AMHS, MES, SCADA, PLCs, vision systems, and sensor data. Design and optimize ETL/ELT workflows using Python, Spark, SQL, and orchestration tools (e.g., Airflow) to transform raw data into actionable insights. Lead database design and performance tuning across SQL and NoSQL systems, optimizing schema design, queries, and indexing strategies for manufacturing data. Enforce robust data governance by implementing data quality checks, lineage tracking, access controls, security measures, and retention policies. Optimize storage and processing efficiency through strategic use of formats (Parquet, ORC), compression, partitioning, and indexing for high-performance analytics. Implement streaming data solutions (using Kafka/RabbitMQ) to handle real-time data flows and ensure synchronization across control systems. Build dashboards using analytics tools like Grafana. Good understanding of the Hadoop ecosystem. Develop standardized data models and APIs to ensure consistency across manufacturing systems and enable data consumption by downstream applications. Collaborate cross-functionally with Platform Engineers, Data Scientists, Automation teams, IT Operations, Manufacturing, and Quality departments. Mentor junior engineers while establishing best practices, documentation standards, and fostering a data-driven culture throughout the organization. Essential Attributes - Expertise in Python programming for building robust ETL/ELT pipelines and automating data workflows. Proficiency with the Hadoop ecosystem. Hands-on experience with Apache Spark (PySpark) for distributed data processing and large-scale transformations. Strong proficiency in SQL for data extraction, transformation, and performance tuning across structured datasets. Proficient in using Apache Airflow to orchestrate and monitor complex data workflows reliably. Skilled in real-time data streaming using Kafka or RabbitMQ to handle data from manufacturing control systems. Experience with both SQL and NoSQL databases, including PostgreSQL, TimescaleDB, and MongoDB, for managing diverse data types. In-depth knowledge of data lake architectures and efficient file formats like Parquet and ORC for high-performance analytics. Proficient in containerization and CI/CD practices using Docker and Jenkins or GitHub Actions for production-grade deployments. Strong understanding of data governance principles, including data quality, lineage tracking, and access control.
Ability to design and expose RESTful APIs using FastAPI or Flask to enable standardized and scalable data consumption. Qualifications - BE/ME degree in Computer Science, Electronics, or Electrical Engineering. Desired Experience Level - Master's + 2 years of relevant experience, or Bachelor's + 4 years of relevant experience. Experience in the semiconductor industry is a plus.
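To make the streaming requirement in this posting concrete, here is a hypothetical PySpark Structured Streaming sketch that parses sensor events from Kafka and lands them as date-partitioned Parquet. The broker, topic, schema, and paths are invented, and the job assumes the spark-sql-kafka package is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-stream").getOrCreate()

# Hypothetical schema for shop-floor sensor readings serialized as JSON.
schema = StructType([
    StructField("tool_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Subscribe to a Kafka topic (broker and topic names are placeholders).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-readings")
    .load()
)

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
parsed = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("r")).select("r.*")

# Land the stream as date-partitioned Parquet with a checkpoint for fault tolerance.
(parsed.withColumn("event_date", F.to_date("event_ts"))
    .writeStream
    .format("parquet")
    .option("path", "/lake/sensors/readings")
    .option("checkpointLocation", "/lake/_checkpoints/sensors")
    .partitionBy("event_date")
    .outputMode("append")
    .start())
```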
Posted 6 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
PySpark, a powerful data processing framework built on top of Apache Spark and Python, is in high demand in the job market in India. With the increasing need for big data processing and analysis, companies are actively seeking professionals with PySpark skills to join their teams. If you are a job seeker looking to excel in the field of big data and analytics, exploring PySpark jobs in India could be a great career move.
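If you are new to PySpark, a minimal example gives a feel for the API this demand is built around: the snippet below (with made-up sample data) builds a tiny DataFrame and runs an aggregation, the same pattern that real jobs scale up to cluster-sized datasets.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-quickstart").getOrCreate()

# A tiny in-memory DataFrame stands in for a real dataset.
df = spark.createDataFrame(
    [("Bangalore", 12.5), ("Pune", 9.0), ("Hyderabad", 11.2), ("Bangalore", 7.3)],
    ["city", "deal_size_lakhs"],
)

# Group and aggregate, then print the result to the console.
df.groupBy("city").agg(F.round(F.avg("deal_size_lakhs"), 1).alias("avg_deal")).show()

spark.stop()
```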
Here are 5 major cities in India where companies are actively hiring for PySpark roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The estimated salary range for PySpark professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the field of PySpark, a typical career progression may look like this:
1. Junior Developer
2. Data Engineer
3. Senior Developer
4. Tech Lead
5. Data Architect
In addition to PySpark, professionals in this field are often expected to have or develop skills in:
- Python programming
- Apache Spark
- Big data technologies (Hadoop, Hive, etc.)
- SQL
- Data visualization tools (Tableau, Power BI)
As you explore PySpark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this field and advance your career in the world of big data and analytics. Good luck!