8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Full Stack Data Engineer Lead Analyst at Evernorth, you will be a key player in the Data & Analytics Engineering organization of Cigna, a leading health services company. Your role will involve delivering business needs by understanding requirements and deploying software into production. To excel in this position, you should be well versed in critical technologies, eager to learn, and committed to adding value to the business. Ownership, a thirst for knowledge, and an open mindset are essential attributes of a successful Full Stack Engineer.

In addition to delivery responsibilities, you will be expected to embrace an automation-first and continuous-improvement mindset. You will drive the adoption of CI/CD tools and support the enhancement of toolsets and processes. Your ability to articulate clear business objectives aligned with technical specifications, and to work in an iterative, agile manner, will be crucial. Taking ownership and being accountable, writing referenceable and modular code, and ensuring data quality are key behaviors expected of you.

Key Characteristics:
- Independently design and architect solutions
- Demonstrate ownership and accountability
- Write referenceable and modular code
- Possess fluency in specific areas and proficiency in multiple areas
- Exhibit a passion for continuous learning
- Maintain a quality mindset to ensure data quality and business impact assessment

Required Skills:
- Experience developing data integration and ingestion strategies, including the Snowflake cloud data warehouse, AWS S3 buckets, and loading nested JSON-formatted data (a sketch of this pattern follows the listing)
- Strong understanding of Snowflake cloud database architecture
- Proficiency in big data technologies such as Databricks, Hadoop, HiveQL, and Spark (Scala/Python), and cloud technologies such as AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR)
- Experience working on analytical models and enabling their deployment to production via data and analytical pipelines
- Expertise in query tuning and performance improvement
- Previous exposure to an onsite/offshore setup or model

Required Experience & Education:
- 8+ years of professional industry experience
- Bachelor's degree (or equivalent)
- 5+ years of Python scripting experience
- 5+ years of data management and SQL expertise in Teradata & Snowflake
- 3+ years of Agile team experience, preferably with Scrum

Desired Experience:
- Familiarity with version management tools, Git preferred
- Exposure to BDD and TDD development methodologies
- Experience in an agile CI/CD environment; Jenkins experience preferred
- Knowledge of health care information domains is advantageous

Location & Hours of Work:
- (Specify whether the position is remote, hybrid, or in-office, where the role is located, and the required hours of work)

Evernorth is committed to being an Equal Opportunity Employer, actively promoting and supporting diversity, equity, and inclusion efforts throughout the organization. Staff are encouraged to participate in these initiatives to enhance internal practices and external collaborations with diverse client populations.
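The nested-JSON ingestion this posting asks for might look like the following minimal sketch, using the snowflake-connector-python package to copy JSON documents from an S3 external stage into a VARIANT column and then flatten them. All connection details and object names (claims_stage, raw_claims, the doc paths) are hypothetical placeholders, not anything specified by the employer.

```python
# Minimal sketch: land nested JSON from an S3 external stage into a Snowflake
# VARIANT column, then flatten one nested array with LATERAL FLATTEN.
# Credentials and object names below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# Raw documents land in a single VARIANT column.
cur.execute("CREATE TABLE IF NOT EXISTS raw_claims (doc VARIANT)")
cur.execute("""
    COPY INTO raw_claims
    FROM @claims_stage  -- external stage pointing at the S3 bucket
    FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE)
""")

# Flatten a nested array into relational rows for downstream marts.
cur.execute("""
    SELECT doc:member_id::STRING AS member_id,
           f.value:code::STRING AS diagnosis_code
    FROM raw_claims,
         LATERAL FLATTEN(input => doc:diagnoses) f
""")
print(cur.fetchmany(5))
cur.close()
conn.close()
```

Keeping the raw documents in a VARIANT column and flattening downstream preserves the source payload, so schema changes in the feed do not break the load step.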
Posted 19 hours ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be working as an Informatica BDM professional at PibyThree Consulting Pvt Ltd. in Pune, Maharashtra. PibyThree is a global cloud consulting and services provider focusing on Cloud Transformation, Cloud FinOps, IT Automation, Application Modernization, and Data & Analytics. The company's goal is to help businesses succeed by leveraging technology for automation and increased productivity.

Your responsibilities will include:
- A minimum of 4+ years of development and design experience in Informatica Big Data Management
- Excellent SQL skills
- Hands-on work with HDFS, HiveQL, BDM Informatica, Spark, HBase, Impala, and other big data technologies
- Designing and developing BDM mappings in Hive mode for large volumes of INSERT/UPDATE (the underlying pattern is sketched after this listing)
- Creating complex ETL mappings using various transformations such as Source Qualifier, Sorter, Aggregator, Expression, Joiner, Dynamic Lookup, Lookups, Filters, Sequence, Router, and Update Strategy
- The ability to debug Informatica and utilize tools like Sqoop and Kafka

This is a full-time position that requires you to work in person during day shifts. The preferred education qualification is a Bachelor's degree, and the preferred experience is four years of total work experience, including two years specifically in Informatica BDM.
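Informatica BDM mappings are built in the Developer tool rather than written as code, but the Hive-mode bulk INSERT/UPDATE pattern the posting refers to, merging changed rows and rewriting the target rather than updating rows in place, can be illustrated in PySpark. A minimal sketch under that assumption, with hypothetical table names:

```python
# Sketch of a Hive-style bulk upsert: classic Hive tables have no row-level
# UPDATE, so changed records are merged with the existing data and the result
# is rewritten. Table names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-upsert-sketch")
         .enableHiveSupport()
         .getOrCreate())

target = spark.table("dw.customer")               # existing Hive table
updates = spark.table("staging.customer_delta")   # today's changed rows

# Updates win over existing rows: drop target rows whose key appears in the
# delta, then append the delta.
merged = (target
          .join(updates.select("customer_id"), "customer_id", "left_anti")
          .unionByName(updates))

# Materialize to a new table (overwriting a table that is also being read in
# the same job is not allowed), then swap it in for the old one.
merged.write.mode("overwrite").saveAsTable("dw.customer_merged")
```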
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer with over 4 years of experience, you will be responsible for designing, developing, and maintaining scalable data pipelines to facilitate efficient extraction, transformation, and loading (ETL) of data. Your role will involve architecting and implementing data storage solutions such as data warehouses, data lakes, and data marts that align with the organization's business needs. It will be crucial for you to implement robust data quality checks and cleansing techniques to ensure data accuracy and consistency (a minimal example follows this listing).

Your responsibilities will also include optimizing data pipelines for performance, scalability, and cost-effectiveness. Collaboration with data analysts and data scientists to understand data requirements and translate them into technical solutions will be a key aspect of your role. Additionally, you will be required to develop and maintain data security measures to ensure data privacy and regulatory compliance.

Automating data processing tasks with scripting languages like Python and Bash, as well as big data frameworks such as Spark and Hadoop, will be part of your daily work. Monitoring data pipelines and infrastructure for performance, and troubleshooting any issues that arise, will be essential. You will be expected to stay up to date with the latest trends and technologies in data engineering, including cloud platforms like AWS, Azure, and GCP. Documenting data pipelines, processes, and data models for maintainability and knowledge sharing, and contributing to the overall data governance strategy and best practices, will be integral to your role.

Qualifications for this position include a strong understanding of data architectures, data modeling principles, and ETL processes. Proficiency in SQL (e.g., MySQL, PostgreSQL) and experience with big data query languages like Hive and Spark SQL are required. Experience with scripting languages for data manipulation and automation, familiarity with distributed data processing frameworks like Spark and Hadoop, and knowledge of cloud platforms for data storage and processing will be advantageous. Candidates should also possess experience with data quality tools and techniques; excellent problem-solving, analytical, and critical thinking skills; and strong communication, collaboration, and teamwork abilities.

Key Skills: Spark, Hadoop, Python, Windows Azure, AWS, SQL, HiveQL
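One way to read "robust data quality checks" concretely is to gate each pipeline stage on simple assertions before publishing the batch. A minimal PySpark sketch, with illustrative paths, column names, and thresholds:

```python
# Minimal data-quality gate: reject the batch when null rates or duplicate
# keys exceed a threshold. Paths, columns, and thresholds are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical input

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

if total == 0:
    raise ValueError("DQ check failed: empty batch")
if null_ids / total > 0.01:
    raise ValueError(f"DQ check failed: {null_ids} null order_id values")
if dupes > 0:
    raise ValueError(f"DQ check failed: {dupes} duplicate order_id values")

# Only a batch that passes every check is published downstream.
df.write.mode("append").parquet("s3://example-bucket/orders_clean/")
```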
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
Build your career in the Data, Analytics and Reporting Team, working within the world's most innovative bank, one that values creativity and excellence. As a Quant Analytics Analyst within the Data Analytics and Reporting Team (DART), you will be responsible for delivering Management Information System (MIS) solutions and supporting daily operations. Your key responsibilities will include supporting day-to-day operations and tasks related to a functional area or business partner, ensuring projects are completed according to established timelines, assembling data, building reports and dashboards, and identifying risks and opportunities, along with potential solutions, to unlock value.

To excel in this role, you should have professional experience combining business and relevant MIS/technology/reporting experience. You should possess a working understanding of business operations and procedures and the ability to connect them with business fundamentals. You must also have hands-on experience querying different databases and other source systems for the data analysis required for reporting (a typical pull is sketched after this listing). Proficiency in creating reports and business intelligence solutions using tools such as Tableau, Cognos, Python, Alteryx, and SAS is essential. A general desire and aptitude to learn and adapt to new technologies, openness to different perspectives, and the ability to anticipate and resolve customer and general issues with a sense of urgency are crucial for this role.

Ideally, you should have prior experience in reporting and data analysis development, with the ability to meet stringent deadlines. Proficiency in writing and understanding SQL (PL/SQL, T-SQL, PostgreSQL, or similar) and hands-on data analysis experience are also required.

Preferred qualifications include a Bachelor's degree or equivalent. Prior experience with call center technology data (Avaya CMS, IVR, Aspect, eWFM), Fraud Operations, CTO Operations, and other Consumer and Community Banking departments is desired. Experience creating and deploying reports with a BI tool (such as Tableau, MicroStrategy, Cognos, or SSRS), sourcing and compiling data with an ETL-capable tool (such as SSIS, Alteryx, Trifacta, Ab Initio, R, or SAS), and knowledge of R/Python, Anaconda, and HiveQL, along with exposure to cloud databases, will be advantageous.
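The day-to-day workflow described here, querying a source system and shaping the result into an extract a BI tool can consume, might look like the following in Python with pandas and SQLAlchemy. The connection string, table, and columns are placeholders, not details from the posting:

```python
# Sketch of a typical MIS pull: query a call-center source system, aggregate
# daily metrics, and export an extract for a BI tool such as Tableau or
# Cognos. Connection details and table/column names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://report_user:***@dbhost/callcenter")

daily = pd.read_sql(
    """
    SELECT call_date,
           queue,
           COUNT(*) AS calls,
           AVG(handle_time_sec) AS avg_handle_time
    FROM call_log
    WHERE call_date >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY call_date, queue
    ORDER BY call_date, queue
    """,
    engine,
)

daily.to_csv("daily_queue_metrics.csv", index=False)
```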
Posted 2 weeks ago
10.0 - 15.0 years
30 - 40 Lacs
Bengaluru
Hybrid
We are looking for a Cloud Data Engineer with strong hands-on experience in data pipelines, cloud-native services (AWS), and modern data platforms like Snowflake or Databricks. Alternatively, we're open to Data Visualization Analysts with strong BI experience and exposure to data engineering or pipelines. You will collaborate with technology and business leads to build scalable data solutions, including data lakes, data marts, and virtualization layers using tools like Starburst. This is an exciting opportunity to work with modern cloud tech in a dynamic, enterprise-scale financial services environment.

Key Responsibilities:
- Design and develop data pipelines for structured/unstructured data in AWS (a Glue job skeleton follows this listing)
- Build semantic layers and virtualization layers using Starburst or similar tools
- Create intuitive dashboards and reports using Power BI/Tableau
- Collaborate on ETL designs and support testing (SIT/UAT)
- Optimize Spark jobs and ETL performance
- Implement data quality checks and validation frameworks
- Translate business requirements into scalable technical solutions
- Participate in design reviews and documentation

Skills & Qualifications:

Must-Have:
- 10+ years in Data Engineering or related roles
- Hands-on with AWS Glue, Redshift, Athena, EMR, Lambda, S3, Kinesis
- Proficient in HiveQL, Spark, Python, Scala
- Experience with modern data platforms (Snowflake/Databricks)
- 3+ years in ETL tools (Informatica, SSIS) and recent experience in cloud-based ETL
- Strong understanding of Data Warehousing, Data Lakes, and Data Mesh

Preferred:
- Exposure to data virtualization tools like Starburst or Denodo
- Experience in the financial services or banking domain
- AWS Certification (Data specialty) is a plus
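A pipeline of the kind listed under Key Responsibilities might start from an AWS Glue job skeleton like the one below: read a catalogued source, derive a partition column, and write partitioned Parquet to S3. The database, table, and bucket names are hypothetical placeholders:

```python
# Skeleton of an AWS Glue PySpark job: read a Glue-catalogued source, derive
# a partition column, and write partitioned Parquet to S3. All names are
# hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog.
src = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="trades"
)

# Derive the partition column from the event timestamp.
df = src.toDF().withColumn("trade_date", F.to_date("trade_ts"))

(df.write
 .mode("append")
 .partitionBy("trade_date")
 .parquet("s3://example-curated-bucket/trades/"))

job.commit()
```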
Posted 2 weeks ago
4.0 - 8.0 years
8 - 13 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role: Technology Lead
No. of years of experience: 5+

Detailed job description - Skill Set:

Role Summary: As part of the offshore development team, the AWS Developers will be responsible for implementing ingestion and transformation pipelines using PySpark, orchestrating jobs via MWAA, and converting legacy Cloudera jobs to AWS-native services.

Key Responsibilities:
- Write ingestion scripts (batch and stream) to migrate data from on-prem to S3
- Translate existing HiveQL into SparkSQL/PySpark jobs
- Configure MWAA DAGs to orchestrate job dependencies (a DAG sketch follows this listing)
- Build Iceberg tables with appropriate partitioning and metadata handling
- Validate job outputs and write unit tests

Required Skills:
- 3-5 years in data engineering, with strong exposure to AWS
- Experience in EMR (Spark), S3, PySpark, SQL
- Working knowledge of Cloudera/HDFS and legacy Hadoop pipelines
- Prior experience with data lake/lakehouse implementations is a plus

Mandatory Skills: AWS Developer
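An MWAA DAG for the ingest-transform-validate dependency chain in the Key Responsibilities might be wired like this. MWAA is managed Apache Airflow, so this is plain Airflow code; the callables, IDs, and schedule are hypothetical placeholders:

```python
# Sketch of an MWAA (managed Airflow) DAG chaining ingestion, a Spark
# transform, and output validation. Task bodies, IDs, and the schedule are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_to_s3(**_):
    """Batch-copy on-prem extracts to the S3 landing zone (placeholder)."""


def run_spark_transform(**_):
    """Submit the PySpark job replacing the legacy HiveQL (placeholder)."""


def validate_outputs(**_):
    """Row counts and schema checks against the Iceberg tables (placeholder)."""


with DAG(
    dag_id="onprem_to_lakehouse",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_to_s3", python_callable=ingest_to_s3)
    transform = PythonOperator(task_id="spark_transform", python_callable=run_spark_transform)
    validate = PythonOperator(task_id="validate_outputs", python_callable=validate_outputs)

    # Each task runs only after its upstream dependency succeeds.
    ingest >> transform >> validate
```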
Posted 1 month ago
8.0 - 12.0 years
12 - 18 Lacs
Bengaluru
Work from Office
As a Data Architect, you are required to:
- Design and develop technical solutions which combine disparate information to create meaningful insights for the business, using big data architectures
- Build and analyze large structured and unstructured databases based on scalable cloud infrastructures
- Develop prototypes and proofs of concept using multiple data sources and big data technologies
- Process, manage, extract, and cleanse data to apply data analytics in a meaningful way
- Design and develop scalable end-to-end data pipelines for batch and stream processing (a bronze-layer sketch follows this listing)
- Regularly scan the data analytics landscape to stay up to date with the latest technologies, techniques, tools, and methods in this field
- Stay curious and enthusiastic about using related technologies to solve problems, and enthuse others to see the benefit in the business domain

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Engineering/Analytics is desirable.

Experience level: Minimum 8 years in software development, with at least 2-3 years of hands-on experience in big data / data engineering.

Desired Knowledge & Experience:

Data Engineer - Big Data Developer:
- Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming; knowledge of Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Test: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing
- Big data design: Lakehouse/Medallion Architecture, Parquet/Delta, Partitioning, Distribution, Data Skew, Compaction
- Languages: Python/Functional Programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: Data Lake and Big Data Storage Design

Additionally, it is helpful to know the basics of:
- Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, Stored Procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data Catalog: Azure Purview, Apache Atlas, Informatica

Big Data Architect:
- Expert in the technologies, languages, and methodologies mentioned under Data Engineer - Big Data Developer
- Mentors and educates developers in those technologies, languages, and methodologies
- Architecture styles: Lakehouse, Lambda, Kappa, Delta, Data Lake, Data Mesh, Data Fabric, Data Warehouses (e.g., Data Vault)
- Application architecture: Microservices, NoSQL, Kubernetes, Cloud-native
- Experience: Many years of experience with all kinds of technology across the evolution of data platforms (Data Warehouse -> Hadoop -> Big Data -> Cloud -> Data Mesh)
- Certification: Architect certification (e.g., Siemens Certified Software Architect or iSAQB CPSA)

Required Soft Skills & Other Capabilities:
- Excellent communication skills, in order to explain your work to people who don't understand the mechanics behind data analysis
- Great attention to detail and the ability to solve complex business problems
- Drive and the resilience to try new ideas if the first ones don't work
- Good planning and organizational skills
- A collaborative approach to sharing ideas and finding solutions
- Ability to work independently and also in a global team environment
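As one concrete instance of the Lakehouse/Medallion items above: on Databricks, the bronze layer is often an Auto Loader stream landing raw files into a Delta table. A minimal sketch, assuming a Databricks runtime; all paths and the table name are hypothetical:

```python
# Bronze layer of a medallion architecture on Databricks: Auto Loader
# ("cloudFiles") incrementally ingests newly arrived JSON files into a Delta
# table. Assumes a Databricks runtime; paths and names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
    .load("/mnt/landing/events/")
)

(bronze_stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/lake/_checkpoints/events_bronze")
 .trigger(availableNow=True)  # drain the backlog, then stop
 .toTable("lakehouse.bronze_events"))
```

Silver and gold layers would then read the bronze table, apply cleansing and conformance, and publish curated Delta tables for consumption.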
Posted 2 months ago